Particle Physics Planet


April 28, 2015

astrobites - astro-ph reader's digest

Why is star formation so inefficient?

Why easy when you can make it complicated?

The historic picture of star formation (SF), in which interstellar gas simply collapses under its own gravity, suffers from being too simple. Incorporating this view into simulations of the process results in a crazily high star formation efficiency, meaning that far too much gas is turned into stars. For a long time it was thought that magnetic effects like ambipolar diffusion, in which charged particles coupled to the magnetic field slow down uncharged ones through collisions, reduce the efficiency of the star formation process and bring the star formation rate down to the observed levels. However, this picture requires the centers of molecular cloud cores, where stars are created, to be extremely dense and their envelopes to carry high magnetic fluxes, which is unfortunately not observed. Thus, there must be other mechanisms that prevent all the gas from being turned into stars quickly.

Turbulence and other stuff…

Several mechanisms have been proposed to overcome the problem of the magnetic influence and simultaneously bring star formation rates down to realistic values. In this study the author runs magnetohydrodynamical (gas + magnetic fields) simulations of star formation with the FLASH code and tests how different mechanisms alter the star formation process. The effects incorporated in the simulations are:

  • turbulence in the gas,
  • magnetic fields, and
  • stellar feedback in the form of jets and outflows.

In the simulations, one physical mechanism is added at a time, starting from a relatively simple setup and building up to a more and more complex one with all ingredients turned on. The following video (created by C. Federrath, make sure to watch the video in HD) shows the density of gas along our line of sight, indicated from white (low density) to blue (intermediate density) and yellow (high density). Stars that have been successfully formed are shown as white dots. Each box represents a simulation with a specific combination of the above mechanisms (check the descriptions in the top left).

So what do we see? Obviously, in the simulation with gravity only (top left) a lot of stars pop up extremely quickly. Adding more and more mechanisms from the above list decreases the star formation efficiency and star formation rate, as nicely visualised in Figure 1.


Figure 1: Star formation rate versus time for simulations including different physical mechanisms. The black dashed region indicates the range constrained by observations. The SFR is much too high in the simpler simulations; adding more complexity by considering turbulence, magnetic fields and stellar feedback via jets and outflows brings it down to more realistic values. The sudden drop in SFR at t ~ 4 for the black line is thought to reflect the self-regulation of the feedback mechanism (see text). Source: Federrath (2015)

As it turns out, each ingredient added in the increasingly complex simulations reduces the SFR by about a factor of two! This means that turbulence, magnetic fields and stellar feedback seem to contribute approximately equally to the needed decrease in the SFR. Additionally, the sudden drop in SFR in the most complex simulation shows signs of an emergent phenomenon: after an initial burst of star formation from the collapsing gas, the jets and outflows of the forming stars trigger turbulence in the surviving gas, which hinders further fragmentation. From that point onwards the SFR stays rather constant, which can be seen as a self-regulation mechanism.

Reality check and ways to go

The most notable result of this work is that the simulations reach a star formation rate close to the rate observed in molecular clouds. The most complex simulation features an SFR ~ 0.04, compared with SFR ~ 0.01 from observations. From that, the author concludes that the roles of turbulence and magnetic fields are likely greater than suggested by recent computational studies, which underlined the importance of stellar feedback but were not able to reach values as low as this. Finally, the author argues that a further reduction of the SFR in theoretical studies requires considering types of feedback beyond those included here. A possible candidate is radiation pressure, the influence of stellar irradiation on the surrounding gas.

by Tim Lichtenberg at April 28, 2015 09:51 AM

April 27, 2015

Christian P. Robert - xi'an's og

extending ABC to high dimensions via Gaussian copula

Li, Nott, Fan, and Sisson arXived last week a new paper on ABC methodology that I read on my way to Warwick this morning. The central idea in the paper is (i) to estimate marginal posterior densities for the components of the model parameter by non-parametric means; and (ii) to consider all pairs of components to deduce the correlation matrix R of the Gaussian (inverse cdf) transform of the pairwise rank statistics. From those two low-dimensional estimates, the authors derive a joint Gaussian-copula distribution by using inverse cdf transforms and the correlation matrix R, ending up with a meta-Gaussian representation

f(\theta)=\dfrac{1}{|R|^{1/2}}\exp\{\eta^\prime(I-R^{-1})\eta/2\}\prod_{i=1}^p g_i(\theta_i)

where the η’s are the Gaussian quantile transforms of the cdf transforms of the θ’s, that is,

\eta_i=\Phi^{-1}(G_i(\theta_i))

Or rather

\eta_i=\Phi^{-1}(\hat{G}_i(\theta_i))

given that the g’s are estimated.

This is obviously an approximation of the joint in that, even in the most favourable case when the g’s are perfectly estimated, and thus the components perfectly Gaussian, the joint is not necessarily Gaussian… But it sounds quite interesting, provided the cost of running all those transforms is not overwhelming. For instance, if the g’s are kernel density estimators, they involve sums of possibly a large number of terms.
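For concreteness, here is a minimal numerical sketch of the meta-Gaussian construction (my own illustration, not the authors' code), with synthetic draws standing in for the ABC output: the marginals ĝ_i are kernel density estimates, the η's are the normal scores of the ranks, and R is their correlation matrix.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm, rankdata

# Synthetic stand-in for ABC output: n draws of a p-dimensional parameter
# with non-Gaussian marginals but a clear dependence structure.
rng = np.random.default_rng(0)
n, p = 2000, 3
theta = rng.multivariate_normal(np.zeros(p), 0.5 * np.eye(p) + 0.5, size=n) ** 3

# (i) non-parametric estimates of the marginal densities g_i
kdes = [gaussian_kde(theta[:, i]) for i in range(p)]

# (ii) normal scores eta_i = Phi^{-1}(G_i(theta_i)) via ranks, and their correlation R
eta = norm.ppf((rankdata(theta, axis=0) - 0.5) / n)
R = np.corrcoef(eta, rowvar=False)

def meta_gaussian_logpdf(x, sample, kdes, R):
    """Log of the meta-Gaussian density in the displayed formula, at a point x."""
    # empirical cdfs of the sample stand in for the estimated G_i
    u = np.clip((sample <= x).mean(axis=0), 1e-6, 1 - 1e-6)
    e = norm.ppf(u)
    quad = 0.5 * e @ (np.eye(len(x)) - np.linalg.inv(R)) @ e
    log_marg = sum(np.log(k(xi)[0]) for k, xi in zip(kdes, x))
    return -0.5 * np.log(np.linalg.det(R)) + quad + log_marg

print(meta_gaussian_logpdf(np.median(theta, axis=0), theta, kdes, R))
```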

One thing that bothers me in the approach, albeit mostly at a conceptual level since I realise the practical appeal, is the use of different summary statistics for approximating different uni- and bi-dimensional marginals. This makes for an incoherent joint distribution, again at a conceptual level, as I do not see immediate practical consequences… Those local summaries also have to be identified, component by component, which adds another level of computational cost to the approach, even when using a semi-automatic approach as in Fearnhead and Prangle (2012), although the whole algorithm relies on a single reference table.

The examples in the paper are (i) the banana shaped “Gaussian” distribution of Haario et al. (1999) that we used in our PMC papers, with a twist; and (ii) a g-and-k quantile distribution. The twist in the banana (!) is that the banana distribution is the prior associated with the mean of a Gaussian observation. In that case, the meta-Gaussian representation seems to hold almost perfectly, even in p=50 dimensions. (If I remember correctly, the hard part in analysing the banana distribution was reaching the tails, which are extremely elongated in at least one direction.) For the g-and-k quantile distribution, the same holds, even for a regular ABC. What seems to be of further interest would be to exhibit examples where the meta-Gaussian is clearly an approximation. If such cases exist.


Filed under: Books, pictures, Statistics, Travel, Uncategorized, University life Tagged: ABC, copulas, population Monte Carlo, quantile distribution

by xi'an at April 27, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

Bill Nye’s Earth Day Visit with the President of the United States
Last week, our CEO Bill Nye joined The President of the United States for an Earth Day visit to The Everglades, one of the country's renowned National Parks and a vital global ecosystem. The Washington Post covered the news, and we at The Planetary Society shared in the excitement.

April 27, 2015 07:41 PM

Peter Coles - In the Dark

Small Business letter to the Telegraph; an attempt to defraud the electorate?

telescoper:

This unravelling story shows that the Conservative Party’s campaign is both inept and dishonest. Initially I thought it was hilarious but now it’s getting very serious indeed.

Originally posted on sturdyblog:

How the letter from small business owners to the Telegraph in support of the Tories fell apart

There is a lot, so I’ll be brief.

Huge thanks to the many people on Twitter who sent me discrepancies all day, as they discovered them.

The day started with the Conservatives and the Prime Minister claiming a major victory.


Things soon began to unravel, when it emerged that this wasn’t the unsolicited, spontaneous combustion of love from small business to the Tories, which had been presented. In fact the Conservative Party had generated the letter and asked its members to sign it.


Things got much more tangled up when it was discovered that the background document, containing the names and signatures of the “small business owners” on the Telegraph website, still bore the metadata tags of Conservative Campaign Headquarters.


Say what you want, claimed a Tory councillor to me. The source is not important. What is…



by telescoper at April 27, 2015 07:18 PM

Peter Coles - In the Dark

Astronomy and Forensic Science – The Herschel Connection

When I was in Bath on Friday evening I made a point of visiting the Herschel Museum, which is located in the house in which Sir William Herschel lived for a time, before moving to Slough.

Unfortunately I got there too late to go inside. It did remind me however of an interesting connection between astronomy and forensic science, through a certain William Herschel.

When I give popular talks about Cosmology,  I sometimes look for appropriate analogies or metaphors in detective fiction or television programmes about forensic science. I think cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe;  forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens.

Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish the truth about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works. I have a feeling that I’ve stretched this analogy to breaking point but at least it provides some kind of excuse for mentioning the Herschel connection.

In fact the Herschel connection comes through William James Herschel, the grandson of William Herschel and the eldest son of John Herschel, both of whom were eminent astronomers. William James Herschel was not an astronomer, but an important figure in the colonial establishment in India. In the context relevant to this post, however, his claim to fame is that he is credited with being the first European to have recognized the importance of fingerprints for the purposes of identifying individuals. William James Herschel started using fingerprints in this way in India in 1858; some examples are shown below (taken from the wikipedia page).

Fingerprints taken by William James Herschel, 1859-1860

Later,  in 1877 at Hooghly (near Calcutta) he instituted the use of fingerprints on contracts and deeds to prevent the then-rampant repudiation of signatures and he registered government pensioners’ fingerprints to prevent the collection of money by relatives after a pensioner’s death. Herschel also fingerprinted prisoners upon sentencing to prevent various frauds that were attempted in order to avoid serving a prison sentence.

The use of fingerprints in solving crimes was to come much later, but there’s no doubt that Herschel’s work on this was an important step.


by telescoper at April 27, 2015 01:59 PM

Peter Coles - In the Dark

We have a Beautiful Cosmos

On the bus coming up to campus just now, I was looking through the Brighton Festival programme (the Festival starts on 2nd May) and found that there is a show called The Beautiful Cosmos of Ivor Cutler, which is on at the Theatre Royal. As a devout fan of Ivor Cutler I’ll definitely be going, but in the mean time here is the title track (set to video…)

And here be the lyrics:

You are the centre of your little world
and I am of mine.
No one again we meet for tea
we’re two of a kind.

This is our universe…
cups of tea.
We have a beautiful cosmos,
you and me.
We have a beautiful cosmos.

What do we talk of whenever we meet:
nothing at all.
You sit with a sandwich,
I look at a roll.
Sometimes I open my mouth,
then shut it.

We have a beautiful cosmos,
you and me.
We have a beautiful cosmos.

You are the centre of your little world
and I am of mine.
No one again we meet for tea
we’re two of a kind.

This is our universe…
cups of tea.
We have a beautiful cosmos,
you and me.
We have a beautiful cosmos.


by telescoper at April 27, 2015 07:46 AM

April 26, 2015

Jester - Resonaances

Weekend plot: dark photon update
Here is a late weekend plot with new limits on the dark photon parameter space:

The dark photon is a hypothetical massive spin-1 boson mixing with the ordinary photon. The minimal model is fully characterized by just 2 parameters: the mass mA' and the mixing angle ε. This scenario is probed by several different experiments using completely different techniques.  It is interesting to observe how quickly the experimental constraints have been improving in the recent years. The latest update appeared a month ago thanks to the NA48 collaboration. NA48/2 was an experiment a decade ago at CERN devoted to studying CP violation in kaons. Kaons can decay to neutral pions, and the latter can be recycled into a nice probe of dark photons.  Most often,  π0 decays to two photons. If the dark photon is lighter than 135 MeV, one of the photons can mix into an on-shell dark photon, which in turn can decay into an electron and a positron. Therefore,  NA48 analyzed the π0 → γ e+ e-  decays in their dataset. Such pion decays occur also in the Standard Model, with an off-shell photon instead of a dark photon in the intermediate state.  However, the presence of the dark photon would produce a peak in the invariant mass spectrum of the e+ e- pair on top of the smooth Standard Model background. Failure to see a significant peak allows one to set limits on the dark photon parameter space, see the dripping blood region in the plot.
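To illustrate the logic of such a search with a toy sketch (mine, not the NA48/2 analysis; the signal mass, width, yields and background shape below are all invented), one can simulate an e+ e- invariant mass spectrum, inject a narrow peak, and look for a localized excess over the neighbouring bins:

```python
import numpy as np

# Toy sketch (not the NA48/2 analysis): a bump hunt in the e+e- invariant
# mass spectrum of pi0 -> gamma e+ e- decays. All numbers are invented.
rng = np.random.default_rng(1)
m_pi0 = 0.135                                   # GeV

bkg = m_pi0 * rng.random(200_000) ** 2          # smooth, falling "SM" spectrum
sig = rng.normal(0.050, 0.0005, size=500)       # hypothetical narrow peak at m_A' = 50 MeV
masses = np.concatenate([bkg, sig])

edges = np.linspace(0.01, m_pi0, 60)
counts, _ = np.histogram(masses, bins=edges)
centres = 0.5 * (edges[1:] + edges[:-1])

# crude local significance: compare each bin with the average of its neighbours
expected = 0.5 * (np.roll(counts, 1) + np.roll(counts, -1))
z = (counts - expected) / np.sqrt(np.maximum(expected, 1))
i = 1 + np.argmax(z[1:-1])
print(f"largest excess near m = {1000 * centres[i]:.0f} MeV, ~{z[i]:.1f} sigma")
```

Absence of any such excess in the real data is what gets turned into the exclusion region in the plot.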

So, another cute experiment bites into the dark photon parameter space. After this update, one can robustly conclude that the mixing angle in the minimal model has to be less than 0.001 as long as the dark photon is lighter than 10 GeV. This is by itself not very revealing, because there is no theoretically preferred value of ε or mA'. However, one interesting consequence of the NA48 result is that it closes the window where the minimal model can explain the 3σ excess in the muon anomalous magnetic moment.

by Jester (noreply@blogger.com) at April 26, 2015 11:55 PM

Christian P. Robert - xi'an's og

probabilistic numerics

I attended a highly unusual workshop while in Warwick last week. Unusual for me, obviously. It was about probabilistic numerics, i.e., the use of probabilistic or stochastic arguments in the numerical resolution of (possibly) deterministic problems. The notion in this approach is fairly Bayesian in that it makes use of prior information or beliefs about the quantity of interest, e.g., a function, to construct a (usually Gaussian process) prior and derive both an estimator that is identical to a numerical method (e.g., Runge-Kutta or trapezoidal integration) and uncertainty or variability around this estimator. While I did not grasp much more than the classy introduction talk by Philipp Hennig, this concept sounds fairly interesting, if only because of the Bayesian connection, and I wonder if we will soon see a probabilistic numerics section at ISBA! More seriously, placing priors on functions or functionals is a highly formal perspective (as in Bayesian non-parametrics) and it makes me wonder how much of the data (evaluations of a function at a given set of points) and how much of the prior is reflected in the output [variability]. (Obviously, one could also ask a similar question for statistical analyses!) For instance, issues of singularity arise among those stochastic process priors.

Another question that stemmed from this talk is whether or not more efficient numerical methods can be derived that way, in addition to recovering the most classical ones. Somewhat, somehow, given the idealised nature of the prior, it feels like priors could be more easily compared or ranked than in classical statistical problems, since the aim is to figure out the value of an integral or the solution to an ODE. (Or maybe not, since again almost the same could be said about estimating a normal mean.)
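For concreteness, here is a toy sketch of probabilistic integration (my own illustration, not from the workshop), assuming a Gaussian process prior with a squared-exponential kernel and a fixed length-scale: a handful of evaluations of f yields both an estimate of ∫₀¹ f(t) dt and a posterior variance around it.

```python
import numpy as np
from scipy.special import erf

# GP prior on the integrand with a squared-exponential kernel
def k(x, y, ell=0.2):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

def kernel_integral(x, ell=0.2):
    # z_i = int_0^1 k(t, x_i) dt, in closed form for this kernel
    return ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (np.sqrt(2) * ell))
                                       + erf(x / (np.sqrt(2) * ell)))

f = lambda t: np.sin(3 * t) + t ** 2      # the (here known) integrand
x = np.linspace(0.05, 0.95, 8)            # a handful of evaluation points
y = f(x)

ell, jitter = 0.2, 1e-10
K = k(x, x, ell) + jitter * np.eye(len(x))
z = kernel_integral(x, ell)

grid = np.linspace(0, 1, 400)
Z = k(grid, grid, ell).mean()             # int_0^1 int_0^1 k(t, t') dt dt', on a grid

post_mean = z @ np.linalg.solve(K, y)     # Bayesian quadrature estimate of the integral
post_var = max(Z - z @ np.linalg.solve(K, z), 0.0)
print(f"BQ estimate : {post_mean:.5f} +/- {np.sqrt(post_var):.1e}")
print(f"trapezoidal : {np.trapz(f(grid), grid):.5f}")
```

With a Brownian motion prior instead, the posterior mean is known to reproduce the trapezoidal rule, which is the kind of identification between priors and classical numerical methods mentioned above.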


Filed under: pictures, Running, Statistics, Travel, University life Tagged: Bayesian statistics, Brownian motion, Coventry, CRiSM, Gaussian processes, numerical analysis, numerical integration, Persi Diaconis, probability theory, Runge-Kutta, stochastic processes, sunrise, trapezoidal approximation, University of Warwick, Warwickshire, workshop

by xi'an at April 26, 2015 10:15 PM

Christian P. Robert - xi'an's og

Peter Coles - In the Dark

Could the SNP block a Labour Budget? No.

telescoper:

Interesting post about the constitutional limits on the ability of the SNP to influence UK budget setting.

Originally posted on Colin Talbot:

The SNP are claiming they can ‘block Labour budgets’, ‘end austerity’ and ‘stop Trident’. Their problem however is simple – most of what they say is based on assuming that Westminster works the same way as Holyrood does for budgeting – and it doesn’t. There are huge ‘constitutional’ and practical obstacles to implementing the sort of radical challenges to Government tax and spend decisions that the SNP and others seem to be mooting. The first set of problems is that in the Westminster parliament only the Government can propose taxation or spending measures. These can be defeated, or amended, but only by cutting spending or lowering or removing taxes – not by increasing either.



by telescoper at April 26, 2015 01:17 PM

Tommaso Dorigo - Scientificblogging

Offenbach in Athens
"La belle Hélène" is a beautiful operetta by Jacques Offenbach. Now for the first time it has been translated and performed in Greek in Athens, by a group of very talented singers under the artistic direction of Panagiotis Adam. I saw "Η Ωράια Ελένη" yesterday at the Olvio theatre in Athens, and I enjoyed it a whole lot. 

The story unfolds as Eleni, the wife of Sparta's king Menelaos, lives in a world where men concern themselves only with warfare and neglect love. Paris, the prince of Troy, arrives disguised as a shepherd and catches her attention. Eleni's flirtation with Paris is discovered by Menelaos, but the two manage to escape together.

read more

by Tommaso Dorigo at April 26, 2015 08:08 AM

Lubos Motl - string vacua and pheno

People's evolving opinions about quantum gravity
About nine years ago, a movement trying to (largely or entirely) "replace" string theory research with would-be "competitors" culminated. Unproductive critics and third-class researchers disconnected from the last 30 years in physics were often marketed as peers of top string theorists – and sometimes as something better.

However, aside from the cheap anti-science populism, there has never been any substance in their claims, and one can't really run on "promises" indefinitely. For a while, the theory group at the Perimeter Institute operated as a fan club of Lee Smolin's of a sort – a warrior in the "string wars". Thankfully, "string wars" are over and the crackpots have lost. Unfortunately, they have been replaced by lots of other nonsense. Did this replacement make things better or worse? I don't know.




To return to the positive news, let me mention that after many decades, Sabine Hossenfelder – once a person working at the periphery of the loop quantum gravity community – has understood a lethal problem with the Poincaré-invariant networks.




The correct insight is that the Minkowski space can't really be represented by a network or graph that would fill it. The very same argument (with very similar pictures) has been written on this blog many times since 2004 – see e.g. the second myth, "a structure of links or surfaces filling a Minkowski space may be Lorentz-invariant, at least statistically", in the 2009 TRF blog post about myths about the minimal length.

I won't repeat it here but it's trivial to see that every quasi-uniform network drawn on a paper inevitably picks a preferred reference frame – the Lorentz boosts acting on the network inevitably make it look "skewed" and "different". So a theory representing the space by the network breaks the Lorentz symmetry – pretty much by 100 percent or so. Consequently, spin foams and all similar discrete theories of quantum gravity may be falsified (by pointing to the successful tests of the Lorentz invariance) within a few seconds, assuming that the falsifier is adequately competent and fast. Now, decades later, people like Hossenfelder started to get the point, too.

(Just to be sure, I don't claim to be the first one who invented the argument, although I hadn't heard it in the same clear form before.)

Second. The Perimeter Institute used to be a center of the "bogus research" of quantum gravity. Fortunately, real physics research was gradually strengthening over there in the last 10-15 years. So three days ago, Witten gave a PI lecture on superstring perturbation theory, a way to calculate the stringy S-matrix using the super world sheets (with Grassmannian coordinates).

It is a nice piece of technology but the talk is very technical so I won't recommend it to non-experts. However, I must mention that I liked his points about the vertex operators' being localized on divisors etc. because I have spent lots of time thinking about the mysterious duality – and generalizations of string theory to 2+2-dimensional membranes – and the conclusion that the vertex operators should be associated with divisors seemed to be rather solid. I should write an update about these matters at some point.

PR, ER=EPR, fuzzballs vs AMPS firewalls

Meanwhile, a new controversy has erupted in quantum gravity – in the serious (at least before a split?) quantum gravity done by people who are familiar at least with the basic lessons from string/M-theory, and most of them are great string theorists. No, I don't mean the "gravity as the entropic force" nonsense that is hopefully dead by now as well (much like the spin foams above) – although the dying still took about 50 times longer than it should have.

I mean the controversies about the "firewall" claims by Polchinski and collaborators, and positive insights that were made as part of the argumentation that the firewalls don't really exist.

The cover story of the April 2015 issue of Scientific American is hyping firewalls and Joe Polchinski himself wrote this embarrassing, self-glorifying yet completely wrong, article. I've written some critiques before. But I await a guest blog about the matters and fuzzballs by Samir Mathur – who is probably also rightfully upset that some of his decade-old claims are being attributed to Polchinski and pals.

But this deeply flawed pro-firewall propaganda makes it to many articles that are otherwise avoiding it, too. So the Quanta Magazine wrote the first article in a series about the ER-EPR correspondence
Wormholes Untangle a Black Hole Paradox
K.C. Cole wrote a pretty clear exposition of the proposed universal equivalence between the wormholes and the quantum entanglement. The article also contains an amusing historical anecdote about the birth of ER=EPR. How was it born? Well, on one sunny day, the master (Susskind's nickname for Juan Maldacena) wrote a cryptic e-mail to Susskind that said ER=EPR. Susskind read the e-mail, immediately saw where it was going, and decided it was correct, so he agreed to put his name on the preprint. Nicely done, Gentlemen! ;-)

I must say that the origin of my much less well-known joint paper with Susskind was the same with one M-name replaced by another. :-)

There are some reactions to ER=EPR mentioned in Cole's article. Preskill tries to be characteristically neutral and talks about uncertain smells. Shenker is supportive of the "big new insight". On the other hand, Polchinski and Marolf, the main guys behind the AMPS firewall meme, do their best to kick into Maldacena's and Susskind's insight while looking like Gentlemen.

The two last paragraphs of Cole's article are dedicated to Marolf's criticism of ER=EPR which I find vague enough not to be "totally sharply and rigorously wrong" but which is still weird and at least morally wrong if you realize what he is saying in between the lines:
To be sure, ER = EPR does not yet apply to just any kind of space, or any kind of entanglement. It takes a special type of entanglement and a special type of wormhole. “Lenny and Juan are completely aware of this,” said Marolf, who recently co-authored a paper describing wormholes with more than two ends. ER = EPR works in very specific situations, he said, but AMPS argues that the firewall presents a much broader challenge.

Like Polchinski and others, Marolf worries that ER = EPR modifies standard quantum mechanics. “A lot of people are really interested in the ER = EPR conjecture,” said Marolf. “But there’s a sense that no one but Lenny and Juan really understand what it is.” Still, “it’s an interesting time to be in the field.”
Please, Don!

First of all, it is complete nonsense that ER=EPR only applies to one special case "so far". ER=EPR is one of the insights that – assuming that they are right, and I think that the case is extremely strong – apply as generally as you can get. The degree of generality is similar to Joule's equivalence of heat and work – or, indeed, as Noether's theorem. The claim really is that any entanglement is some kind of a wormhole; and any physically allowed wormhole may be equivalently described as a spacetime without a wormhole but with near-maximally entangled local degrees of freedom.

What could Marolf have meant by the wormhole's being very special? That it is an Einstein-Rosen bridge and not a traversable wormhole? It has to be so. This is a part of the insight. A traversable wormhole would imply massive non-locality (violation of special relativity) – and this is forbidden in both descriptions, one with the wormhole and one with the entanglement. So the choice of the non-traversable wormholes isn't a sign of any incompleteness of the proposal. It is a detail that makes Maldacena's and Susskind's proposal much more specific and bold. Traversable wormholes are almost certainly prohibited by the laws of physics – and Maldacena and Susskind confirm this expectation while a new argument for the non-existence of traversable wormholes arises as a corollary of their work.

Otherwise, the insight works in many spacetimes where the wormholes have technically different shapes – and different number of dimensions, among other things. They also claim that a description for "excited" wormholes exists on both sides, too. Tiny amounts of entanglement don't allow the wormholes to be big and smooth. But that's not a defect of the proposal, either. It's another bold prediction that follows from the ER=EPR line of reasoning.

If wormholes with many throats etc. are allowed as well, there probably exists some very special kind of entanglement of the three systems that describes the object, too.

But it's really the comparison of the generality of ER=EPR and of AMPS that sounds crazy. Marolf claims that AMPS is much more general. Well, it's not. It's exactly the other way around. ER=EPR is supposed to be relevant and possible for any black hole interior – it may be connected elsewhere without spoiling the overall outside appearance of the black hole. On the other hand, AMPS is as special as a physically wrong claim may be. It is extremely special because the firewall is only derived for theories where the black hole complementarity is forbidden as a strict assumption, where the exact locality holds, and where the field operators are state-independent. With these very strong assumptions, you end up with Polchinski-like contradictions. But the assumptions are not obeyed in string theory – in consistent theories of quantum gravity – which makes their range of validity pretty much zero for all practical purposes.

The claim that "AMPS is more general than ER=EPR" is at least morally untrue. It is hard to decide how you define the "degree of generality" for two qualitatively different hypotheses that disagree with one another (which implies that at most one of them is actually right) and want to organize quantum gravity in different ways. But if I try to define the "degree of generality" in any sensible way, it's clear that ER=EPR is much more general.

Marolf's (and Polchinski's) assertion that "ER=EPR seems to violate the postulates of quantum mechanics" is preposterous, too. How could it violate the general rules of quantum mechanics? It's constructed within these rules from the very beginning. One has the quantum mechanical description of the microstates of two identical but separated black holes (pretty much a tensor product of two copies of the same Hilbert space of microstates). And one simply claims that some entangled basis vectors in this product Hilbert space may be given new labels, as "simple" microstates of an Einstein-Rosen bridge. How could it violate anything about quantum mechanics? It's really just a collection of new labels for some vectors in the Hilbert space. A way to define new observables – field operators in the wormhole's interior – that are complementary to (i.e. non-commuting with) the usual field operators in two black hole interiors. From the birth of quantum mechanics, its power to allow, encourage, or force us to use superpositions of states (e.g. the entangled states) – and see that many of them are eigenstates of rather natural operators – has been one of the most characteristic changes that quantum mechanics represented. Quantum mechanics is all about the non-commuting operators, stupid!

This ER-EPR correspondence is a non-vacuous hypothesis – there has to exist some Hamiltonian or evolution in both pictures that agrees with both descriptions (that share the idea about the evolution of the exterior of the black holes or the wormhole). But this test is a dynamical question, not one that could change anything about the fact that Maldacena and Susskind operate within the totally standard quantum mechanical framework at every moment of their research.

I think that by now, it should be clear that Raju and Papadodimas' insights about the unavoidable state dependence are the key – and it is the key that is misunderstood by Marolf, Polchinski, and others. Even in ER=EPR, I just said that what the field operators are depends on which "corner of the Hilbert space" you want to describe. For the non-entangled corners, you pick the field operators describing two interiors. For the maximally entangled corners, field operators within an ER bridge are a better description (in ER=EPR, the state dependence is "manifest" because the two regions of the Hilbert space invite you to use two inequivalent sets of field operators because the spacetime topologies are different). The modest claim here is that the local field operators always have a "limited range of validity" on the Hilbert space. But that shouldn't be controversial. That's an aspect of backreaction or the impossibility to describe quantum gravity in a manifestly local way.

Finally, Marolf claims that "no one but Juan and Lenny understand ER=EPR". Now, this is just a plain lie. Perhaps a cute lie – because it mimics the bizarre claims that general relativity was only understood by 12 people in the world. But it is a lie, anyway. I surely do claim to understand the content of the claim as well as Maldacena and Susskind, and so do numerous people who have written about it.

In less than 2 years, the ER=EPR paper has 150 citations and most of them seem to understand what Maldacena and Susskind say and why. 150 isn't astronomical but it's almost the same rate – 320 citations in 3 years – that the AMPS firewall paper has.

If Marolf was really talking about sociology, I think that if you look at the 320 followups of AMPS, there will be so many that cite it even though they disagree with the firewall claims that the total body of papers citing either AMPS or ER=EPR will have a majority thinking that the firewalls don't exist.

The truth isn't decided by polls, however. One may still see that Marolf's sociological claims painting Maldacena and Susskind as two lonely fenceposts is a complete fabrication – especially if you read this fabrication from someone who implicitly says that his "argument that firewalls exist" is understood and agreed with by almost everyone.

It always bothers me: Couldn't Erik Verlinde see that gravity as the entropic force has some lethal bugs he may have been (and we may have been) unaware of that kill the whole picture? Isn't Joe Polchinski, an extremely smart physicist, able to see that similar problems with their AMPS proof (and loopholes) have been found as well – along with some new, much more positive and specific insights – that make the beef of the firewall paper pretty much evaporate? And that make them ignore some nice developments just because they don't agree with some disproved faith of theirs?

Of course, there are more brutal examples of this. Couldn't Gerard 't Hooft, a more than well-deserved Nobel prize winner, have seen that his claims about hydrodynamical models behind quantum mechanics have turned into a complete, self-evident failure after those 20 years? Why do all these men keep on defending something that has become completely indefensible for so many years? Have they really lost the ability to think rationally, or are they afraid to admit that they were wrong and they know that to defend an arbitrarily silly proposition will always be OK in the broader public because most people don't have a clue, anyway?

In the late 1990s, when 't Hooft began with his hydrodynamic things that could have been shown to be wrong, he was ignored, despite his immense aura. But I am worried that similar, demonstrably wrong "research directions" are eating an increasing portion of the researchers, that a majority of the body of researchers is losing their competence. Are we entering a period in which people will defend AMPS-like paradoxes and entropic gravities for centuries even though it should take at most minutes for a competent physicist to understand why they're not right? Are we returning to the Middle Ages?

by Luboš Motl (noreply@blogger.com) at April 26, 2015 05:38 AM

April 25, 2015

Christian P. Robert - xi'an's og

the forever war [book review]

Another book I bought somewhat on a whim, although I cannot remember which one… The latest edition has a preface by John Scalzi, author of Old Man’s War and its sequels, where he acknowledged he would not have written this series, had he previously read The Forever War. Which strikes me as ironic, as I found Scalzi’s novels way better. Deeper. And obviously not getting obsolete so quickly! (As an aside, Scalzi is returning to the Old Man’s War universe with a new novel, The End of All Things.)

“…it’s easy to compute your chances of being able to fight it out for ten years. It comes to about two one-thousandths of one percent. Or, to put it another way, get an old-fashioned six-shooter and play Russian Roulette with four of the six chambers loaded. If you can do it ten times in a row without decorating the opposite wall, congratulations! You’re a civilian.”

This may be the main issue with The Forever War: the fact that it sounds so antiquated, which makes reading the novel like an exercise in Creative Writing 101, spotting how the author was so rooted in the 1970’s that he could not project far enough into the future to make his novel sustainable. The main obstacle to the suspension of disbelief required to proceed through the book is the low-tech configuration of Haldeman’s future. Even though intergalactic travel is possible via the traditional portals found in almost every sci’-fi’ book, computers are blatantly missing from the picture. And so is artificial intelligence. (2001: A Space Odyssey was made in 1968, right?!) The economics of a forever warring Earth are quite vague and unconvincing. There are no clever tactics in the war against the Taurans. Even the battle scenes are far from exciting. Esp. the parts where they fight with swords and arrows. And the treatment of sexuality has not aged well. So all that remains in favour of the story (and presumably made the success of the book) is the description of the ground soldier’s life, which could almost be transcribed verbatim to another war and another era. End of the story. (Unsurprisingly, while being the first book picked for the SF Masterworks series, The Forever War did not make it into the 2011 series…)


Filed under: Books, Kids Tagged: Joe Haldeman, John Scalzi, science fiction, space opera, The End of All Things, Vietnam

by xi'an at April 25, 2015 10:15 PM

Peter Coles - In the Dark

A Happy Hubble Coincidence


Preoccupied with getting ready for my talk in Bath, I forgot to post an item pointing out that yesterday was the 25th anniversary of the launch of the Hubble Space Telescope. Can it really be so long?

Anyway, many happy returns to Hubble. I did manage to preempt the celebrations, however, by choosing the above picture of the Hubble Ultra Deep Field as the background for the poster advertising the talk.

Anyway it went reasonably well. There was a full house and questions went on quite a while. Thanks to Bath Royal Literary and Scientific Institution for the invitation!


by telescoper at April 25, 2015 06:23 PM

Marco Frasca - The Gauge Connection

NASA and warp drive: An update

ResearchBlogging.org

There is some excitement on the net about news of Harold White’s experiment at NASA. I came across it by chance on a forum, a well-frequented site where people at NASA post and regularly update on the work they are carrying out. You may also have noticed some activity on the Wikipedia pages about it (see here at the section on EmDrive and here). Wikipedia’s section on the EmDrive explains in a few lines what is going on. Running a laser inside the RF cavity of the device, they observed an unusual effect. They do not know yet whether this could be better explained by more mundane causes like air heating inside the cavity itself. They will repeat the measurements in a vacuum chamber to exclude that possibility. I present here some of the slides used by White to report on this.

This is the current take by Dr. White, as reported on the nasaspaceflight forum by one of his colleagues rather prone to leaking:

 …to be more careful in declaring we’ve observed the first lab based space-time warp signal and rather say we have observed another non-negative results in regards to the current still in-air WFI tests, even though they are the best signals we’ve seen to date. It appears that whenever we talk about warp-drives in our work in a positive way, the general populace and the press reads way too much into our technical disclosures and progress.

I would like to recall that White is not using exotic matter at all. Rather, he is working with strong RF fields to try to develop a warp bubble. This was stated here, even if implicitly. Finally, an EmDrive device has been properly described here. Using strong external fields to modify a space-time locally has been described here.

If this is confirmed in the next few months, it will represent the biggest breakthrough in experimental general relativity since Eddington confirmed the bending of light near the Sun. Applications would follow if the idea proves scalable, but it would be a shocking result anyway. We look forward to hearing from White very soon.

Marco Frasca (2005). Strong coupling expansion for general relativity. Int. J. Mod. Phys. D 15:1373-1386, 2006. arXiv: hep-th/0508246v3


Filed under: Astronautics, General Relativity, Mathematical Physics, News, Physics, Rumors Tagged: Alcubierre drive, General relativity, Harold White, NASA, Warp drive

by mfrasca at April 25, 2015 03:53 PM

Tommaso Dorigo - Scientificblogging

Pictures Of March 20th Eclipse From Svalbard
I am presently in Athens for a few days, to give a seminar and meet the local group of CMS physicists. So yesterday evening I took the chance to visit the Astrophysics department of the University of Athens, whose top floor houses a nice 40 cm Cassegrain telescope (see picture below). There I joined a small crowd that professor Kosmas Gazeas entertained with views of Jupiter, the Moon, Venus, and a few other celestial targets. I need to thank my friend Nadia, a fellow physicist and amateur astronomer, for inviting us to the event.


read more

by Tommaso Dorigo at April 25, 2015 10:21 AM

April 24, 2015

Emily Lakdawalla - The Planetary Society Blog

A few gems from the latest Cassini image data release
I checked out the latest public image release from Cassini and found an awesome panorama across Saturn's rings, as well as some pretty views looking over Titan's north pole.

April 24, 2015 11:30 PM

Emily Lakdawalla - The Planetary Society Blog

New Horizons One Earth Message
The One Earth Message Project is going to send a message to the stars, and we invite members of the Planetary Society to join us in this historic endeavor.

April 24, 2015 07:08 PM

Clifford V. Johnson - Asymptotia

The Works…
You know what? I'm going to throw this into the works and see what happens... spanner -cvj Click to continue reading this post

by Clifford at April 24, 2015 05:46 PM

Emily Lakdawalla - The Planetary Society Blog

LightSail Readiness Tests Prepare Team for Mission Operations
The LightSail team continues to prepare for the spacecraft's May test flight with a series of readiness simulations that mimic on-orbit operations.

April 24, 2015 05:36 PM

CERN Bulletin

CERN Bulletin

GAC-EPA drop-in sessions
The GAC organises monthly drop-in sessions with individual meetings. The next session will take place on: Tuesday 5 May from 1.30 pm to 4.00 pm, in the Staff Association meeting room. The following sessions will take place on Tuesdays 2 June, 1 September, 6 October, 3 November and 1 December 2015. The drop-in sessions of the Pensioners' Group (Groupement des Anciens) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

by GAC-EPA at April 24, 2015 03:02 PM

CERN Bulletin

Deep Red (Profondo Rosso)
Wednesday 29 April 2015 at 20:00 CERN Council Chamber    Deep Red (Profondo Rosso) Directed by Dario Argento (Italy, 1975) 126 minutes A psychic who can read minds picks up the thoughts of a murderer in the audience and soon becomes a victim. An English pianist gets involved in solving the murders, but finds many of his avenues of inquiry cut off by new murders, and he begins to wonder how the murderer can track his movements so closely. Original version Italian; English subtitles

by Cine Club at April 24, 2015 02:59 PM

CERN Bulletin

ORIENTEERING
The region's orienteers met last Saturday in the woods of Pougny/Challex for the race organised by the CERN orienteering club. The map used for the 5 courses offered both a very technical side with steep terrain and a side with large flat areas of open forest. The long technical course (20 controls) was won by Beat Muller of COLJ Lausanne in 56:26, ahead of Denis Komarov, CO CERN, in 57:30 and Yvan Balliot, ASO Annecy, in 57:46. The results for the other courses are as follows:
Medium technical (13 controls): 1st Joël Mathieu in 52:32, one second ahead of 2nd Vladimir Kuznetsov, COLJ Lausanne-Jorat; 3rd Jean-Bernard Zosso, CO CERN, in 54:01.
Short technical (12 controls): 1st Lennart Jirden, CO CERN, in 47:38; 2nd Léo Fragnol, CO CERN, in 52:05; 3rd Valentina Venturi in 1:02:19.
Medium easy (13 controls): 1st Cédric Blaser, CO CERN, in 44:33; 2nd Rolf Wipfly in 59:07; 3rd Valérie Vittet, CO CERN, in 59:12.
Short easy (9 controls): 1st Manon Rousselot, Balise 25 Besançon, in 26:48; 2nd Clément Genot in 27:47; 3rd Manon Genot in 29:06.
The next race counting towards the spring cup will be held on Saturday 25 April in the Chancy/Valleiry forest. Registration will take place on site, with starts between 1 pm and 3 pm (details on the club website http://cern.ch/club-orientation). Children aged 7 to 12 are welcome at the club's school on Wednesday afternoons from 2.30 pm to 4.30 pm, where they will learn how to orient the map, identify features on it, use the compass, and so on. Details on how to take part in these sessions are given on the club website.

by Club d'orientation du CERN at April 24, 2015 02:53 PM

CERN Bulletin

Rock parties - Cuban salsa, bachata and Kizomba workshops
The CERN Dancing Club organizes a Cuban salsa, bachata and Kizomba workshop open to everyone on Saturday 9 May in its B566 ballroom (see the poster). The club also invites you to its Rock party. Club parties are free and open to everyone: you bring something to eat and the club provides the (non-alcoholic) drinks.

by CERN Dancing Club at April 24, 2015 02:43 PM

arXiv blog

Security Experts Hack Teleoperated Surgical Robot

The first hijacking of a medical telerobot raises important questions over the security of remote surgery, say computer security experts.


A crucial bottleneck that prevents life-saving surgery being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the ones that are available.

April 24, 2015 02:32 PM

Tommaso Dorigo - Scientificblogging

A Paper on Leptonic CP violation
I received from Ravi Kuchimanchi, the author of a paper to be published in Phys. Rev. D, the following summary, and am happy to share it here. The paper is available on the arXiv.
Are laws of nature left-right symmetric? 


read more

by Tommaso Dorigo at April 24, 2015 07:00 AM

The n-Category Cafe

A synthetic approach to higher equalities

At last, I have a complete draft of my chapter for Elaine Landry’s book Categories for the working philosopher. It’s currently titled

  • Homotopy Type Theory: A synthetic approach to higher equalities. pdf

As you can see (if you read it), not much is left of the one fragment of draft that I posted earlier; I decided to spend the available space on HoTT itself rather than detour into synthetic mathematics more generally. Although the conversations arising from that draft were still helpful, and my other recent ramblings did make it in.

Comments, questions, and suggestions would be very much appreciated! It’s due this Sunday (I got an extension from the original deadline), so there’s a very short window of time to make changes before I have to submit it. I expect I’ll be able to revise it again later in the process, though.

by shulman (viritrilbia@gmail.com) at April 24, 2015 04:35 AM

April 23, 2015

Emily Lakdawalla - The Planetary Society Blog

Can nuclear waste help humanity reach for the stars?
With the shortage of plutonium-238 to power space missions, Europe has decided to focus on an accessible alternative material that could power future spacecraft: americium-241.

April 23, 2015 09:07 PM

John Baez - Azimuth

Categories in Control
To understand ecosystems, ultimately will be to understand networks. – B. C. Patten and M. Witkamp

A while back I decided one way to apply my math skills to help save the planet was to start pushing toward green mathematics: a kind of mathematics that can interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. As usual with math, the payoffs will come slowly, but they may be large. It’s not a substitute for doing other, more urgent things—but if mathematicians don’t do this, who will?

As a first step in this direction, I decided to study networks.

This May, a small group of mathematicians is meeting in Turin for a workshop on the categorical foundations of network theory, organized by Jacob Biamonte. I’m trying to get us mentally prepared for this. We all have different ideas, yet they should fit together somehow.

Tobias Fritz, Eugene Lerman and David Spivak have all written articles here about their work, though I suspect Eugene will have a lot of completely new things to say, too. Now it’s time for me to say what my students and I have been doing.

Despite my ultimate aim of studying biological and ecological networks, I decided to start by clarifying the math of networks that appear in chemistry and engineering, since these are simpler, better understood, useful in their own right, and probably a good warmup for the grander goal. I’ve been working with Brendan Fong on electrical circuits, and with Jason Erbele on control theory. Let me talk about this paper:

• John Baez and Jason Erbele, Categories in control.

Control theory is the branch of engineering that focuses on manipulating open systems—systems with inputs and outputs—to achieve desired goals. In control theory, signal-flow diagrams are used to describe linear ways of manipulating signals, for example smooth real-valued functions of time. Here’s a real-world example; click the picture for more details:

For a category theorist, at least, it is natural to treat signal-flow diagrams as string diagrams in a symmetric monoidal category. This forces some small changes of perspective, which I’ll explain, but more important is the question: which symmetric monoidal category?

We argue that the answer is: the category \mathrm{FinRel}_k of finite-dimensional vector spaces over a certain field k, but with linear relations rather than linear maps as morphisms, and direct sum rather than tensor product providing the symmetric monoidal structure. We use the field k = \mathbb{R}(s) consisting of rational functions in one real variable s. This variable has the meaning of differentiation. A linear relation from k^m to k^n is thus a system of linear constant-coefficient ordinary differential equations relating m ‘input’ signals and n ‘output’ signals.

Our main goal in this paper is to provide a complete ‘generators and relations’ picture of this symmetric monoidal category, with the generators being familiar components of signal-flow diagrams. It turns out that the answer has an intriguing but mysterious connection to ideas that are familiar in the diagrammatic approach to quantum theory! Quantum theory also involves linear algebra, but it uses linear maps between Hilbert spaces as morphisms, and the tensor product of Hilbert spaces provides the symmetric monoidal structure.

We hope that the category-theoretic viewpoint on signal-flow diagrams will shed new light on control theory. However, in this paper we only lay the groundwork.

Signal flow diagrams

There are several basic operations that one wants to perform when manipulating signals. The simplest is multiplying a signal by a scalar. A signal can be amplified by a constant factor:

f \mapsto cf

where c \in \mathbb{R}. We can write this as a string diagram:

Here the labels f and c f on top and bottom are just for explanatory purposes and not really part of the diagram. Control theorists often draw arrows on the wires, but this is unnecessary from the string diagram perspective. Arrows on wires are useful to distinguish objects from their duals, but ultimately we will obtain a compact closed category where each object is its own dual, so the arrows can be dropped. What we really need is for the box denoting scalar multiplication to have a clearly defined input and output. This is why we draw it as a triangle. Control theorists often use a rectangle or circle, using arrows on wires to indicate which carries the input f and which the output c f.

A signal can also be integrated with respect to the time variable:

f \mapsto \int f

Mathematicians typically take differentiation as fundamental, but engineers sometimes prefer integration, because it is more robust against small perturbations. In the end it will not matter much here. We can again draw integration as a string diagram:

Since this looks like the diagram for scalar multiplication, it is natural to extend \mathbb{R} to \mathbb{R}(s), the field of rational functions of a variable s which stands for differentiation. Then differentiation becomes a special case of scalar multiplication, namely multiplication by s, and integration becomes multiplication by 1/s. Engineers accomplish the same effect with Laplace transforms, since differentiating a signal f is equivalent to multiplying its Laplace transform

\displaystyle{  (\mathcal{L}f)(s) = \int_0^\infty f(t) e^{-st} \,dt  }

by the variable s. Another option is to use the Fourier transform: differentiating f is equivalent to multiplying its Fourier transform

\displaystyle{   (\mathcal{F}f)(\omega) = \int_{-\infty}^\infty f(t) e^{-i\omega t}\, dt  }

by i\omega. Of course, the function f needs to be sufficiently well-behaved to justify calculations involving its Laplace or Fourier transform. At a more basic level, it also requires some work to treat integration as the two-sided inverse of differentiation. Engineers do this by considering signals that vanish for t < 0, and choosing the antiderivative that vanishes under the same condition. Luckily all these issues can be side-stepped in a formal treatment of signal-flow diagrams: we can simply treat signals as living in an unspecified vector space over the field \mathbb{R}(s). The field \mathbb{C}(s) would work just as well, and control theory relies heavily on complex analysis. In our paper we work over an arbitrary field k.
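As a quick sanity check of the Laplace-transform remark (an aside of mine, using sympy on a concrete well-behaved signal), differentiating does act as multiplication by s, up to the usual boundary term:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t)                     # a concrete, well-behaved signal

F = sp.laplace_transform(f, t, s, noconds=True)                # 1/(s + 2)
Fdot = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True) # transform of f'

# L{f'}(s) = s F(s) - f(0): differentiation acts as multiplication by s
print(sp.simplify(Fdot - (s * F - f.subs(t, 0))))              # 0
```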

The simplest possible signal processor is a rock, which takes the 'input' given by the force F on the rock and produces as 'output' the rock's position q. Thanks to Newton's second law F=ma, we can describe this using a signal-flow diagram:

Here composition of morphisms is drawn in the usual way, by attaching the output wire of one morphism to the input wire of the next.
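In the same spirit, here is a tiny sympy check (mine, not part of the paper) that composing the boxes of the rock's diagram, scalar multiplication by 1/m followed by two integrations, amounts to multiplication by 1/(m s^2) in the field \mathbb{R}(s), i.e. to the relation m s^2 q = F:

```python
import sympy as sp

s, m = sp.symbols('s m', positive=True)

# Each box of the diagram is a 1x1 matrix over the field R(s):
scale = sp.Matrix([[1 / m]])          # scalar multiplication by 1/m
integrate = sp.Matrix([[1 / s]])      # one integration = multiplication by 1/s

rock = integrate * integrate * scale  # compose the three boxes
print(sp.simplify(rock[0, 0]))        # 1/(m*s**2), i.e. q = F/(m s^2)
```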

To build more interesting machines we need more building blocks, such as addition:

+ : (f,g) \mapsto f + g

and duplication:

\Delta :  f \mapsto (f,f)

When these linear maps are written as matrices, their matrices are transposes of each other. This is reflected in the string diagrams for addition and duplication:

The second is essentially an upside-down version of the first. However, we draw addition as a dark triangle and duplication as a light one because we will later want another way to ‘turn addition upside-down’ that does not give duplication. As an added bonus, a light upside-down triangle resembles the Greek letter \Delta, the usual symbol for duplication.
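Concretely (a small numerical aside of my own, not from the paper), in the standard basis addition is the 1 × 2 matrix [1 1] and duplication is its transpose:

```python
import numpy as np

add = np.array([[1, 1]])      # + : k^2 -> k,      (f, g) |-> f + g
dup = np.array([[1], [1]])    # Delta : k -> k^2,  f |-> (f, f)

print(np.array_equal(dup, add.T))    # True: the two matrices are transposes
print(add @ np.array([2, 3]))        # [5]    : 2 + 3
print(dup @ np.array([4]))           # [4 4]  : the signal duplicated
```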

While they are typically not considered worthy of mention in control theory, for completeness we must include two other building blocks. One is the zero map from the zero-dimensional vector space \{0\} to our field k, which we denote as 0 and draw as follows:

The other is the zero map from k to \{0\}, sometimes called ‘deletion’, which we denote as ! and draw thus:

Just as the matrices for addition and duplication are transposes of each other, so are the matrices for zero and deletion, though they are rather degenerate, being 1 \times 0 and 0 \times 1 matrices, respectively. Addition and zero make k into a commutative monoid, meaning that the following relations hold:

The equation at right is the commutative law, and the crossing of strands is the braiding:

B : (f,g) \mapsto (g,f)

by which we switch two signals. In fact this braiding is a symmetry, so it does not matter which strand goes over which:

Dually, duplication and deletion make k into a cocommutative comonoid. This means that if we reflect the equations obeyed by addition and zero across the horizontal axis and turn dark operations into light ones, we obtain another set of valid equations:

There are also relations between the monoid and comonoid operations. For example, adding two signals and then duplicating the result gives the same output as duplicating each signal and then adding the results:

This diagram is familiar in the theory of Hopf algebras, or more generally bialgebras. Here it is an example of the fact that the monoid operations on k are comonoid homomorphisms—or equivalently, the comonoid operations are monoid homomorphisms.
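
Here is a quick matrix check of this bimonoid law over the real numbers (a sketch; the helper direct_sum and the braiding permutation are ours): adding and then duplicating equals duplicating both inputs, braiding the two middle wires, and then adding in pairs.

import numpy as np

add = np.array([[1., 1.]])     # + : k^2 -> k
dup = np.array([[1.], [1.]])   # Delta : k -> k^2

def direct_sum(A, B):
    # block-diagonal matrix: the monoidal product in (FinVect_k, direct sum)
    return np.block([[A, np.zeros((A.shape[0], B.shape[1]))],
                     [np.zeros((B.shape[0], A.shape[1])), B]])

braid_middle = np.eye(4)[[0, 2, 1, 3]]   # (a, b, c, d) -> (a, c, b, d)

lhs = dup @ add                                                   # add, then duplicate
rhs = direct_sum(add, add) @ braid_middle @ direct_sum(dup, dup)  # duplicate, braid, add
assert np.allclose(lhs, rhs)   # both are [[1, 1], [1, 1]]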

We summarize this situation by saying that k is a bimonoid. These are all the bimonoid laws, drawn as diagrams:


The last equation means we can actually make the diagram at left disappear, since it equals the identity morphism on the 0-dimensional vector space, which is drawn as nothing.

So far all our string diagrams denote linear maps. We can treat these as morphisms in the category \mathrm{FinVect}_k, where objects are finite-dimensional vector spaces over a field k and morphisms are linear maps. This category is equivalent to the category where the only objects are vector spaces k^n for n \ge 0, and a morphism from k^m to k^n can be seen as an n \times m matrix. The space of signals is a vector space V over k which may not be finite-dimensional, but this does not cause a problem: an n \times m matrix with entries in k still defines a linear map from V^m to V^n in a functorial way.

In applications of string diagrams to quantum theory, we make \mathrm{FinVect}_k into a symmetric monoidal category using the tensor product of vector spaces. In control theory, we instead make \mathrm{FinVect}_k into a symmetric monoidal category using the direct sum of vector spaces. In Lemma 1 of our paper we prove that for any field k, \mathrm{FinVect}_k with direct sum is generated as a symmetric monoidal category by the one object k together with these morphisms:

where c \in k is arbitrary.

However, these generating morphisms obey some unexpected relations! For example, we have:

It is thus important to find a complete set of relations obeyed by these generating morphisms, thereby obtaining a presentation of \mathrm{FinVect}_k as a symmetric monoidal category. We do this in Theorem 2. In brief, these relations say:

(1) (k, +, 0, \Delta, !) is a bicommutative bimonoid;

(2) the rig operations of k can be recovered from the generating morphisms;

(3) all the generating morphisms commute with scalar multiplication.

Here item (2) means that +, \cdot, 0 and 1 in the field k can be expressed in terms of signal-flow diagrams as follows:

Multiplicative inverses cannot be so expressed, so these signal-flow diagrams do not yet know that k is a field. Additive inverses cannot be expressed in this way either. So, we expect that a version of Theorem 2 will hold whenever k is a mere rig: that is, a ‘ring without negatives’, like the natural numbers. The one change is that instead of working with vector spaces, we should work with finitely generated free k-modules.

Item (3), the fact that all our generating morphisms commute with scalar multiplication, amounts to these diagrammatic equations:

While Theorem 2 is a step towards understanding the category-theoretic underpinnings of control theory, it does not treat signal-flow diagrams that include ‘feedback’. Feedback is one of the most fundamental concepts in control theory because a control system without feedback may be highly sensitive to disturbances or unmodeled behavior. Feedback allows these uncontrolled behaviors to be mollified. As a string diagram, a basic feedback system might look schematically like this:

The user inputs a ‘reference’ signal, which is fed into a controller, whose output is fed into a system, which control theorists call a ‘plant’, which in turn produces its own output. But then the system’s output is duplicated, and one copy is fed into a sensor, whose output is added to (or, if we prefer, subtracted from) the reference signal.

In string diagrams—unlike in the usual thinking on control theory—it is essential to be able to read any diagram from top to bottom as a composite of tensor products of generating morphisms. Thus, to incorporate the idea of feedback, we need two more generating morphisms. These are the ‘cup':

and ‘cap':

These are not maps: they are relations. The cup imposes the relation that its two inputs be equal, while the cap does the same for its two outputs. This is a way of describing how a signal flows around a bend in a wire.

To make this precise, we use a category called \mathrm{FinRel}_k. An object of this category is a finite-dimensional vector space over k, while a morphism from U to V, denoted L : U \rightharpoonup V, is a linear relation, meaning a linear subspace

L \subseteq U \oplus V

In particular, when k = \mathbb{R}(s), a linear relation L : k^m \rightharpoonup k^n is just an arbitrary system of constant-coefficient linear ordinary differential equations relating m input variables and n output variables.
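
For instance, the rock from the earlier example is the linear relation between the force F and the position q cut out by Newton's law, m s^2 q = F, i.e. the graph of multiplication by 1/(m s^2). A tiny sympy sketch (taking m = 1 purely for illustration):

import sympy as sp

s, F, q = sp.symbols('s F q')
m = 1   # the rock's mass, chosen arbitrarily

# over R(s), the ODE m*q'' = F becomes the single linear equation m*s**2*q = F
newton = sp.Eq(m * s**2 * q, F)
print(sp.solve(newton, q))   # [F/s**2]: q is F multiplied by 1/(m s^2), i.e. integrated twice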

Since the direct sum U \oplus V is also the cartesian product of U and V, a linear relation is indeed a relation in the usual sense, but with the property that if u \in U is related to v \in V and u' \in U is related to v' \in V then cu + c'u' is related to cv + c'v' whenever c,c' \in k.

We compose linear relations L : U \rightharpoonup V and L' : V \rightharpoonup W as follows:

L'L = \{(u,w) \colon \; \exists\; v \in V \;\; (u,v) \in L \textrm{ and } (v,w) \in L'\}

Any linear map f : U \to V gives a linear relation F : U \rightharpoonup V, namely the graph of that map:

F = \{ (u,f(u)) : u \in U \}

Composing linear maps thus becomes a special case of composing linear relations, so \mathrm{FinVect}_k becomes a subcategory of \mathrm{FinRel}_k. Furthermore, we can make \mathrm{FinRel}_k into a monoidal category using direct sums, and it becomes symmetric monoidal using the braiding already present in \mathrm{FinVect}_k.
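
This composition rule is easy to implement by storing a linear relation as the column span of a matrix. Below is a numerical sketch over the real numbers (the helper names graph, compose, adjoint and same_relation are ours, not from the paper); the last check anticipates the ‘turning around’ operation discussed below.

import numpy as np
from scipy.linalg import null_space

# A linear relation L : k^m -|-> k^n is stored as (m, n, G), where the columns of
# the (m+n) x r matrix G span L as a subspace of k^m (+) k^n.

def graph(A):
    """The relation {(u, A u)} coming from a linear map given as an n x m matrix A."""
    n, m = A.shape
    return (m, n, np.vstack([np.eye(m), A]))

def compose(L1, L2):
    """L2 after L1: all (u, w) such that (u, v) is in L1 and (v, w) is in L2 for some v."""
    (m, n, G), (n2, p, H) = L1, L2
    assert n == n2
    # coefficient vectors (a, b) whose 'middle' components agree: G_bottom a = H_top b
    N = null_space(np.hstack([G[m:, :], -H[:n, :]]))
    top = np.hstack([G[:m, :], np.zeros((m, H.shape[1]))])
    bottom = np.hstack([np.zeros((p, G.shape[1])), H[n:, :]])
    return (m, p, np.vstack([top, bottom]) @ N)

def adjoint(L):
    """Turn L : k^m -|-> k^n around by swapping its two blocks of coordinates."""
    m, n, G = L
    return (n, m, np.vstack([G[m:, :], G[:m, :]]))

def same_relation(L1, L2):
    """Two generator matrices span the same subspace iff no rank grows on stacking them."""
    (_, _, G), (_, _, H) = L1, L2
    r = np.linalg.matrix_rank
    return r(G) == r(H) == r(np.hstack([G, H]))

# composing graphs reproduces composition of linear maps: (x2) then (x5) is (x10)
assert same_relation(compose(graph(np.array([[2.]])), graph(np.array([[5.]]))),
                     graph(np.array([[10.]])))

# turning around multiplication by c gives multiplication by 1/c
c = 3.0
assert same_relation(adjoint(graph(np.array([[c]]))), graph(np.array([[1 / c]])))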

In these terms, the cup is the linear relation

\cup : k^2 \rightharpoonup \{0\}

given by

\cup \; = \; \{ (x,x,0) : x \in k   \} \; \subseteq \; k^2 \oplus \{0\}

while the cap is the linear relation

\cap : \{0\} \rightharpoonup k^2

given by

\cap \; = \; \{ (0,x,x) : x \in k   \} \; \subseteq \; \{0\} \oplus k^2

These obey the zigzag relations:

Thus, they make \mathrm{FinRel}_k into a compact closed category where k, and thus every object, is its own dual.

Besides feedback, one of the things that make the cap and cup useful is that they allow any morphism L : U \rightharpoonup V to be ‘plugged in backwards’ and thus ‘turned around’. For instance, turning around integration:

we obtain differentiation. In general, using caps and cups we can turn around any linear relation L : U \rightharpoonup V and obtain a linear relation L^\dagger : V \rightharpoonup U, called the adjoint of L, which turns out to be given by

L^\dagger = \{(v,u) : (u,v) \in L \}

For example, if c \in k is nonzero, the adjoint of scalar multiplication by c is multiplication by c^{-1}:

Thus, caps and cups allow us to express multiplicative inverses in terms of signal-flow diagrams! One might think that a problem arises when c = 0, but no: the adjoint of scalar multiplication by 0 is

\{(0,x) : x \in k \} \subseteq k \oplus k

In Lemma 3 we show that \mathrm{FinRel}_k is generated, as a symmetric monoidal category, by these morphisms:

where c \in k is arbitrary.

In Theorem 4 we find a complete set of relations obeyed by these generating morphisms, thus giving a presentation of \mathrm{FinRel}_k as a symmetric monoidal category. To describe these relations, it is useful to work with adjoints of the generating morphisms. We have already seen that the adjoint of scalar multiplication by c is scalar multiplication by c^{-1}, except when c = 0. Taking adjoints of the other four generating morphisms of \mathrm{FinVect}_k, we obtain four important but perhaps unfamiliar linear relations. We draw these as ‘turned around’ versions of the original generating morphisms:

Coaddition is a linear relation from k to k^2 that holds when the two outputs sum to the input:

+^\dagger : k \rightharpoonup k^2

+^\dagger = \{(x,y,z)  : \; x = y + z \} \subseteq k \oplus k^2

Cozero is a linear relation from k to \{0\} that holds when the input is zero:

0^\dagger : k \rightharpoonup \{0\}

0^\dagger = \{ (0,0)\} \subseteq k \oplus \{0\}

Coduplication is a linear relation from k^2 to k that holds when the two inputs both equal the output:

\Delta^\dagger : k^2 \rightharpoonup k

\Delta^\dagger = \{(x,y,z)  : \; x = y = z \} \subseteq k^2 \oplus k

Codeletion is a linear relation from \{0\} to k that holds always:

!^\dagger : \{0\} \rightharpoonup k

!^\dagger = \{(0,x) : x \in k \} \subseteq \{0\} \oplus k

Since +^\dagger,0^\dagger,\Delta^\dagger and !^\dagger automatically obey turned-around versions of the relations obeyed by +,0,\Delta and !, we see that k acquires a second bicommutative bimonoid structure when considered as an object in \mathrm{FinRel}_k.

Moreover, the four dark operations make k into a Frobenius monoid. This means that (k,+,0) is a monoid, (k,+^\dagger, 0^\dagger) is a comonoid, and the Frobenius relation holds:

All three expressions in this equation are linear relations saying that the sum of the two inputs equals the sum of the two outputs.

The operation sending each linear relation to its adjoint extends to a contravariant functor

\dagger : \mathrm{FinRel}_k\ \to \mathrm{FinRel}_k ,

which obeys a list of properties that are summarized by saying that \mathrm{FinRel}_k is a †-compact category. Because two of the operations in the Frobenius monoid (k, +,0,+^\dagger,0^\dagger) are adjoints of the other two, it is a †-Frobenius monoid.

This Frobenius monoid is also special, meaning that comultiplication (in this case +^\dagger) followed by multiplication (in this case +) equals the identity:

This Frobenius monoid is also commutative—and cocommutative, but for Frobenius monoids this follows from commutativity.

Starting around 2008, commutative special †-Frobenius monoids have become important in the categorical foundations of quantum theory, where they can be understood as ‘classical structures’ for quantum systems. The category \mathrm{FinHilb} of finite-dimensional Hilbert spaces and linear maps is a †-compact category, where any linear map f : H \to K has an adjoint f^\dagger : K \to H given by

\langle f^\dagger \phi, \psi \rangle = \langle \phi, f \psi \rangle

for all \psi \in H, \phi \in K . A commutative special †-Frobenius monoid in \mathrm{FinHilb} is then the same as a Hilbert space with a chosen orthonormal basis. The reason is that given an orthonormal basis \psi_i for a finite-dimensional Hilbert space H, we can make H into a commutative special †-Frobenius monoid with multiplication m : H \otimes H \to H given by

m (\psi_i \otimes \psi_j ) = \left\{ \begin{array}{cl}  \psi_i & i = j \\                                                                 0 & i \ne j  \end{array}\right.

and unit i : \mathbb{C} \to H given by

i(1) = \sum_i \psi_i

The comultiplication m^\dagger duplicates basis states:

m^\dagger(\psi_i) = \psi_i \otimes \psi_i

Conversely, any commutative special †-Frobenius monoid in \mathrm{FinHilb} arises this way.
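
Here is a small numpy check of this construction (n = 3 is an arbitrary choice), verifying the ‘special’ law and the unit law, and noting that the ‘extra-special’ law discussed below fails for n > 1:

import numpy as np

n = 3                                # dimension, chosen arbitrarily
m = np.zeros((n, n * n))             # m(psi_i (x) psi_j) = psi_i if i = j, else 0
for i in range(n):
    m[i, i * n + i] = 1.0
unit = np.ones((n, 1))               # i(1) = sum_i psi_i
m_dag = m.conj().T                   # comultiplication: duplicates basis states

assert np.allclose(m @ m_dag, np.eye(n))                      # special: m m† = identity
assert np.allclose(m @ np.kron(unit, np.eye(n)), np.eye(n))   # unit law
print((unit.conj().T @ unit).item())  # n, not 1: the extra-special law fails unless n = 1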

Considerably earlier, around 1995, commutative Frobenius monoids were recognized as important in topological quantum field theory. The reason, ultimately, is that the free symmetric monoidal category on a commutative Frobenius monoid is 2\mathrm{Cob}, the category with 2-dimensional oriented cobordisms as morphisms. But the free symmetric monoidal category on a commutative special Frobenius monoid was worked out even earlier: it is the category with finite sets as objects, where a morphism f : X \to Y is an isomorphism class of cospans

X \longrightarrow S \longleftarrow Y

This category can be made into a †-compact category in an obvious way, and then the 1-element set becomes a commutative special †-Frobenius monoid.

For all these reasons, it is interesting to find a commutative special †-Frobenius monoid lurking at the heart of control theory! However, the Frobenius monoid here has yet another property, which is more unusual. Namely, the unit 0 : \{0\} \rightharpoonup k followed by the counit 0^\dagger : k \rightharpoonup \{0\} is the identity:

We call a special Frobenius monoid that also obeys this extra law extra-special. One can check that the free symmetric monoidal category on a commutative extra-special Frobenius monoid is the category with finite sets as objects, where a morphism f : X \to Y is an equivalence relation on the disjoint union X \sqcup Y, and we compose f : X \to Y and g : Y \to Z by letting f and g generate an equivalence relation on X \sqcup Y \sqcup Z and then restricting this to X \sqcup Z.

As if this were not enough, the light operations share many properties with the dark ones. In particular, these operations make k into a commutative extra-special †-Frobenius monoid in a second way. In summary:

(k, +, 0, \Delta, !) is a bicommutative bimonoid;

(k, \Delta^\dagger, !^\dagger, +^\dagger, 0^\dagger) is a bicommutative bimonoid;

(k, +, 0, +^\dagger, 0^\dagger) is a commutative extra-special †-Frobenius monoid;

(k, \Delta^\dagger, !^\dagger, \Delta, !) is a commutative extra-special †-Frobenius monoid.

It should be no surprise that with all these structures built in, signal-flow diagrams are a powerful method of designing processes.

However, it is surprising that most of these structures are present in a seemingly very different context: the so-called ZX calculus, a diagrammatic formalism for working with complementary observables in quantum theory. This arises naturally when one has an n-dimensional Hilbert space $H$ with two orthonormal bases \psi_i, \phi_i that are mutually unbiased, meaning that

|\langle \psi_i, \phi_j \rangle|^2 = \displaystyle{\frac{1}{n}}

for all 1 \le i, j \le n. Each orthonormal basis makes H into a commutative special †-Frobenius monoid in \mathrm{FinHilb}. Moreover, the multiplication and unit of either one of these Frobenius monoids fits together with the comultiplication and counit of the other to form a bicommutative bimonoid. So, we have all the structure present in the list above—except that these Frobenius monoids are only extra-special if H is 1-dimensional.
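
For example (a minimal numpy sketch with n = 4 chosen arbitrarily), the standard basis and the discrete Fourier basis of \mathbb{C}^n are mutually unbiased:

import numpy as np

n = 4                                          # dimension, chosen arbitrarily
psi = np.eye(n)                                # standard basis, as columns
omega = np.exp(2j * np.pi / n)
phi = np.array([[omega ** (j * k) for k in range(n)]
                for j in range(n)]) / np.sqrt(n)   # Fourier basis, as columns

overlaps = np.abs(psi.conj().T @ phi) ** 2
assert np.allclose(overlaps, 1.0 / n)          # |<psi_i, phi_j>|^2 = 1/n for all i, j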

The field k is also a 1-dimensional vector space, but this is a red herring: in \mathrm{FinRel}_k every finite-dimensional vector space naturally acquires all four structures listed above, since addition, zero, duplication and deletion are well-defined and obey all the relations we have discussed. Jason and I focus on k in our paper simply because it generates all the objects of \mathrm{FinRel}_k via direct sum.

Finally, in \mathrm{FinRel}_k the cap and cup are related to the light and dark operations as follows:

Note the curious factor of -1 in the second equation, which breaks some of the symmetry we have seen so far. This equation says that two elements x, y \in k sum to zero if and only if -x = y. Using the zigzag relations, the two equations above give

We thus see that in \mathrm{FinRel}_k, both additive and multiplicative inverses can be expressed in terms of the generating morphisms used in signal-flow diagrams.

Theorem 4 of our paper gives a presentation of \mathrm{FinRel}_k based on the ideas just discussed. Briefly, it says that \mathrm{FinRel}_k is equivalent to the symmetric monoidal category generated by an object k and these morphisms:

• addition +: k^2 \rightharpoonup k
• zero 0 : \{0\} \rightharpoonup k
• duplication \Delta: k\rightharpoonup k^2
• deletion ! : k \rightharpoonup \{0\}
• scalar multiplication c: k\rightharpoonup k for any c\in k
• cup \cup : k^2 \rightharpoonup \{0\}
• cap \cap : \{0\} \rightharpoonup k^2

obeying these relations:

(1) (k, +, 0, \Delta, !) is a bicommutative bimonoid;

(2) \cap and \cup obey the zigzag equations;

(3) (k, +, 0, +^\dagger, 0^\dagger) is a commutative extra-special †-Frobenius monoid;

(4) (k, \Delta^\dagger, !^\dagger, \Delta, !) is a commutative extra-special †-Frobenius monoid;

(5) the field operations of k can be recovered from the generating morphisms;

(6) the generating morphisms (1)-(4) commute with scalar multiplication.

Note that item (2) makes \mathrm{FinRel}_k into a †-compact category, allowing us to mention the adjoints of generating morphisms in the subsequent relations. Item (5) means that +, \cdot, 0, 1 and also additive and multiplicative inverses in the field k can be expressed in terms of signal-flow diagrams in the manner we have explained.

So, we have a good categorical understanding of the linear algebra used in signal flow diagrams!

Now Jason is moving ahead to apply this to some interesting problems… but that’s another story, for later.


by John Baez at April 23, 2015 07:11 PM

arXiv blog

An Algorithm Set To Revolutionize 3-D Protein Structure Discovery

A new way to determine 3-D structures from 2-D images is set to speed up protein structure discovery by a factor of 100,000.


One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task.

April 23, 2015 06:58 PM

ZapperZ - Physics and Physicists

How Big Is The Sun?
Hey, you get to use some of your high-school geometry and trig to make sense of this video!



Zz.

by ZapperZ (noreply@blogger.com) at April 23, 2015 06:48 PM

Symmetrybreaking - Fermilab/SLAC

Extreme cold and shipwreck lead

Scientists have proven the concept of the CUORE experiment, which will study neutrinos with the world’s coldest detector and ancient lead.

Scientists on an experiment in Italy are looking for a process so rare, it is thought to occur less than once every trillion, trillion years. To find it, they will create the single coldest cubic meter in the universe.

The experiment, the Cryogenic Underground Observatory for Rare Events, will begin by the end of the year, scientists recently announced after a smaller version demonstrated the feasibility of the design.

The project, based at Gran Sasso National Laboratory, will examine a property of ghostly neutrinos by looking for a process called neutrinoless double beta decay. If scientists find it, it could be a clue as to why there is more matter than antimatter in the universe–and show that neutrinos get their mass in a way that’s different from all other particles.

The full CUORE experiment requires 19 towers of tellurium dioxide crystals, each made of 52 blocks just smaller than a Rubik’s cube. Physicists will place these towers into a refrigerator called a cryostat and cool it to 10 millikelvin, barely above absolute zero. The cryostat will eclipse even the chill of empty space, which registers a toasty 2.7 Kelvin (minus 455 degrees Fahrenheit).

CUORE uses the cold crystals to search for a small change in temperature caused by these rare nuclear decays. Unlike ordinary double beta decay, in which the two electrons and two antineutrinos share the released energy, neutrinoless double beta decay produces two electrons but no neutrinos at all. It is as if the two antineutrinos that should have been produced annihilate one another inside the nucleus.

“This would be really cool because it would mean that the neutrino and the antineutrino are the same particle, and most of the time we just can’t tell the difference,” says Lindley Winslow, a professor at MIT and one of over 160 scientists working on CUORE.

Neutrinos could be the only fundamental particles of matter to have this strange property.

For the past two years, scientists collected data on just one of the crystal towers housed in a smaller cryostat, a project called CUORE-0. The most recent result establishes the most sensitive limits for seeing neutrinoless double beta decay in tellurium crystals. In addition, the researchers verified that the techniques developed to construct CUORE work well and reduce background radiation prior to the full experiment coming online.

“It’s a great result for Te-130. We are also very excited that we were able to demonstrate that what’s coming online with CUORE is what we hoped it would be,” says Reina Maruyama, professor of physics at Yale University and a member of the CUORE Physics Board. “We look forward to shattering our own result from CUORE-0 once CUORE comes to life.”

Avoiding radioactive contamination and shielding the experiment from outside sources that might mimic the telltale energy signature CUORE is searching for is a priority. The mountains at Gran Sasso will provide one layer of shielding from cosmic bombardment, but the CUORE cryostat will also get a second layer of protection against the minor radiation of the mountain itself. Ancient Roman lead ingots, salvaged from a shipwreck that occurred more than 2000 years ago, have been melted down into a shield that will cocoon the crystal towers.

Lead excels at blocking radiation but can itself become slightly radioactive when hit by cosmic rays. The ingots that sat at the bottom of the sea for two millennia have been spared cosmic bombardment and provide very clean, if somewhat exotic, shielding material.

The next step for CUORE will be to finish commissioning the powerful refrigerator, the largest of its kind. The cryostat must remain stable even with the tons of material inside. After the detector is installed and the cryostat cooled, it will likely take between six months and a year to find the ultimate sensitivity, measure contamination (if there is any), and show that the detector works perfectly, says Yury Kolomensky, senior faculty scientist in the Physics Division at Berkeley Lab and the US spokesperson for the CUORE collaboration. Then it will take data for five years.

“And then we hope to come back with either a discovery [of neutrinoless double beta decay]–or not. And if not, that means we have shrunk the size of the haystack by a factor of 20,” Kolomensky says.

If CUORE goes well, it could find itself a contender for the next generation of neutrinoless double beta decay experiments, something Kolomensky says the nuclear physics community plans to decide over the next two to three years. CUORE uses tellurium, a plentiful isotope that has good energy resolution, meaning scientists can tell precisely where the peak is and what caused it. Other large-scale neutrinoless double beta decay experiments use germanium or xenon instead.

“The worldwide community is looking at all the technologies very carefully,” Kolomensky says. “If our detector works as advertised at this scale, we’ll be in a very strong position to build an even better detector.”

CUORE’s journey has already been more than 30 years in the making, according to Oliviero Cremonesi, spokesperson for the collaboration.

“It’s very emotional for me. We started in the ‘80s with milligram prototypes, and now we have a ton-size detector and a unique cryogenic system,” Cremonesi says. “Even more exciting is the knowledge that this adventure could continue in the future.”

Like what you see? Sign up for a free subscription to symmetry!

by Lauren Biron at April 23, 2015 03:39 PM

ZapperZ - Physics and Physicists

Accelerator Development For National Security
So let me point out this news article first before I go off on my rant. This article describes an application of particle accelerators that is important for national security via the generation of high-energy photons. These photons can be used in a number of different ways for national security purposes.

The compact photon source, which is being developed by Berkeley Lab, Lawrence Livermore National Laboratory, and Idaho National Laboratory, is tunable, allowing users to produce MeV photons within very specific narrow ranges of energy, an improvement that will allow the fabrication of highly sensitive yet safe detection instruments to reach where ordinary passive handheld sensors cannot, and to identify nuclear material such as uranium-235 hidden behind thick shielding. "The ability to choose the photon energy is what would allow increased sensitivity and safety. Only the photons that produce the best signal and least noise would be delivered," explains project lead Cameron Geddes, a staff scientist at the Berkeley Lab Laser Accelerator (BELLA) Center.
.
.
.
To make a tunable photon source that is also compact, Geddes and his team will use one of BELLA's laser plasma accelerators (LPAs) instead of a conventional accelerator to produce a high-intensity electron beam. By operating in a plasma, or ionized gas, LPAs can accelerate electrons 10,000 times "harder" or faster than a conventional accelerator. "That means we can achieve the energy that would take tens of meters in a conventional accelerator within a centimeter using our LPA technology," Geddes says.

I've mentioned this type of advanced accelerator scheme a few times on here, so you can do a search to find out more.

Now, to my rant. I hate the title, first of all. It perpetuates the popular misunderstanding that accelerators mean "high energy physics". Notice that the production of a light source in this case has no connection to the field of high energy physics, and it isn't for such a purpose. The article did mention that this scheme is also being developed as a possible means to generate high-energy electrons for future particle colliders. That's fine, but this scheme is independent of such a purpose, and as can be seen, can be used as a light source for many different uses outside of high energy physics.

Unfortunately, the confusion is also perpetuated by the way funding for accelerator science is done within the DOE. Even though more accelerators in the US are used as light sources (synchrotron and FEL facilities) than for particle colliders, all the funding for accelerator science is still being handled by DOE's Office of Science High Energy Physics Division. DOE's Basic Energy Sciences, which funds synchrotron light sources and SLAC's LCLS, somehow would not consider funding advancement in accelerator science, even though it greatly benefits from this field. NSF, on the other hand, has started to separate out Accelerator Science funding from High Energy Physics funding, even though the separation so far hasn't been clean.

What this means is that, with the funding in HEP in the US taking a dive the past several years, funding in Accelerator Science suffered the same collateral damage, even though Accelerator Science is actually independent of HEP and has vital needs in many areas of physics.

Articles such as this should make it clear that this is not a high energy physics application, and not fall into the trap of associating accelerator science with HEP.

Zz.

by ZapperZ (noreply@blogger.com) at April 23, 2015 03:37 PM

Tommaso Dorigo - Scientificblogging

Searching For The A Boson
Besides being a giant triumph of theoretical physics and the definitive seal on the correctness of the Standard Model -at least at the energies at which we are capable of investigating particle physics nowadays-, the 2012 discovery of the Higgs boson by the CMS and ATLAS collaborations opens the way to new searches for new physics.

The Higgs boson is one more particle we now know how to identify, so we can focus on new exotic phenomena that might produce Higgs bosons in the final state, and entertain ourselves in their search.

read more

by Tommaso Dorigo at April 23, 2015 08:02 AM

April 22, 2015

astrobites - astro-ph reader's digest

“Your heart sounds just fine, PSO J334.2028+01.4075″

If we read the light curve (its brightness versus time) of a quasar (that is, a supermassive black hole with a jet) like an electrocardiogram, we’d conclude that lots of quasars are having heart attacks. Their signals vary in brightness randomly, like the beats of an arrhythmic heart. The randomness of their emission may be related to their central supermassive black holes, which are surrounded by blobs of gas, dust, and stars, accreting onto the black hole at irregular intervals. The astronomers of today’s paper, however, found a quasar with a regular heartbeat. Quasar PSO J334.2028+01.4075 has a very healthy heart rate of 6.7 beats per decade, or once every 542 days. One explanation is that this guy hosts a pair of supermassive black holes. If true, then the astonishing interpretation of this quasar’s heart rate is that its black holes are only a few orbits away from merging! How did they catch their patient at such a critical stage?

"<strong

Figure 1. Surface brightness of an accretion disk surrounding a supermassive black hole binary from a simulation by Farris et al. 2014. Notice the variations induced by the black hole orbits.

In fact, Liu et al. didn’t just stumble upon this quasar. They went looking for it. Here was their train of reasoning: Deep surveys reveal lots of galaxies merging long ago. After a galactic merger, the two central black holes (every big galaxy has one) will migrate to the new galaxy’s center, begin to orbit closely, and drive periodic accretion. Recent simulations reveal that a quasar periodically accreting like this could be visible as a sinusoidally-varying light curve. You can see waves of matter accreting onto the binary black hole in one such simulation in Fig. 1.

If the binary is very near the end of its life, when gravitational radiation begins to drive its rate of inspiral, its orbital period will be of the order of years (that is, for billion-solar-mass black holes—a binary’s inspiral timescale increases linearly with its total mass). In fact, several candidate supermassive black hole binaries have already been identified, for example as two bright spots in the center of a quasar. But these black holes are separated by thousands of parsecs, perhaps not even gravitationally bound to one another. Liu et al. were searching for a pair of supermassive black holes headed doggedly toward merger, not flirting at thousands of parsecs. So they rolled up their sleeves and dug through a multi-year survey of a small patch of sky, looking for a quasar with a light curve rising and falling once every few years.

Liu et al. belong to the Pan-STARRS collaboration, involved in rapid optical surveys of much of the sky since 2010, looking for things that flicker and blink. To fulfill one of their projects, the Medium Deep Survey, they observed ten patches of sky daily in five different colors. If you’re looking for quasars with light curves that oscillate on a timescale of years, this is the perfect data set to chew on.

"<strong

Figure 2. Color-color diagram of all the point sources in the field of view. A relative color-magnitude scale is given on each axis, defined for any given point by subtracting its brightness in one color filter from its brightness in some other color filter (more negative means brighter). The horizontal axis is “ultraviolet-ness”, and the vertical axis is “green-ness”. Thus objects with relatively more light at high energies (like quasars) are clustered in the bottom left.

The color data were helpful in gleaning a subset of potential quasars from all the bright dots in their field of view. The astronomers particularly wanted to avoid misidentifying a variable star (like an RR Lyrae) as a periodically-varying quasar. At optical wavelengths, quasars actually look a lot like stars, which is how they earned their name. But they are relatively brighter at shorter wavelengths than stars are. Liu et al. used a color-color filter, represented in Fig. 2, to select 316 candidate quasars.

Then they identified a subset of 168 quasars with large variations in brightness over the four years of data. Since they were on the hunt for periodic variations, they ran a Fourier analysis separately on each color channel of these light curves, looking for sinusoidal variations. They found 40 quasars with regular heartbeats visible in two or more color channels. Because the baseline of their observation was only four years, their search was only sensitive to periods less than that. In this paper, they present their most significant detection, PSO J334.2028+01.4075. Lightcurves for this quasar in each of four color channels are shown in Fig. 3, folded over the 542 day period.
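
For irregularly sampled light curves like these, the standard tool for such a search is a Lomb-Scargle periodogram. Here is a minimal sketch with synthetic data (the 542-day period is taken from the paper; the sampling, amplitude and noise level are made up for illustration):

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 4 * 365.25, 300))          # ~4 years of epochs, in days
mag = 0.1 * np.sin(2 * np.pi * t / 542.0) + 0.03 * rng.standard_normal(t.size)

periods = np.linspace(100, 1200, 2000)                # trial periods, in days
power = lombscargle(t, mag - mag.mean(), 2 * np.pi / periods)

print("best period: %.0f days" % periods[np.argmax(power)])   # should land near 542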

"<strong

Figure 3. The brightness measurements of this quasar in each of the four observation bands. The best-fit sinusoid is overlaid as a dashed line.

Assuming the virial relationship between quasar spectral properties and black hole mass holds in this weird case with two black holes at the center, Liu et al. inferred that the total mass of the black holes is 10 billion solar masses. Then, identifying the period of the lightcurve with the orbital period of the binary, they calculated a black hole separation of 3-13 milliparsecs, or just 10 or so widths of the central black holes! If their interpretation of the light curve is correct, these central black holes are well on their way to merger, with only a handful of orbits left before that cataclysmic event.
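
As a rough consistency check (a back-of-the-envelope sketch using the observed period directly and ignoring cosmological time dilation, which would shrink the answer), Kepler's third law with these numbers indeed gives a separation of order ten milliparsecs:

import numpy as np

G, M_sun, pc = 6.674e-11, 1.989e30, 3.086e16   # SI units
M = 1e10 * M_sun                               # total binary mass inferred by Liu et al.
P = 542 * 86400.0                              # observed period, in seconds

a = (G * M * P**2 / (4 * np.pi**2)) ** (1 / 3) # Kepler's third law
print(a / pc * 1e3)                            # ~14 milliparsecs, near the top of the quoted range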

The authors make sure to point out that there are other explanations. A single black hole could present a precessing jet, a little like a pulsar, which wobbles more or less in and out of view. But jet precessions usually occur on timescales of hundreds to millions of years. They also highlight a discovery announced in Nature only months before their own, of a similar periodically-varying quasar, with a period of five years. That supermassive black hole binary candidate, however, is lower-mass and nowhere near merger yet. That group also proposes alternative explanations, including hotspots in the accretion disk, or a precessing disk, but none are as simple as the supermassive binary black hole explanation.

Liu et al’s diagnosis of PSO J334.2028+01.4075 will be proven true or false within the decade. Alternatively, a more nuanced understanding of accretion onto binary black holes may contribute to a modified diagnosis. (“I’m terribly sorry about this, but we seem to have made an error. It appears you’ve got another billion years, PSO J334.2028+01.4075.”) But even at this early stage of discovery, Liu et al. have introduced an exciting new technique into time-domain extragalactic astronomy. Similar searches will easily be performed on larger surveys like those of the Large Synoptic Survey Telescope, uncovering thousands more periodic quasars, any one of them a potential host to a supermassive black hole binary on the verge of merger.

by Brett Deaton at April 22, 2015 09:30 PM

ZapperZ - Physics and Physicists

Quantum Entanglement For Dummies
Over the years, I've given many references and resources on quantum entanglement on this blog (check here for one of the more comprehensive references). Now, obviously, many of these sources are highly sophisticated and not really meant for the general public. It is also true that I continue to get and to see questions on quantum entanglement from the public. Worse still, the Deepak Chopras of the world, who clearly do not understand the physics involved, are bastardizing this phenomenon in ridiculous fashion. But the final straw that compelled me to write up this thing is the episode of "Marvel's Agents of S.H.I.E.L.D." from last night where the top brass of HYDRA was trying to explain to Bakshi what "quantum entanglement" is and how Gordon was using it to teleport from one location to another. ABSURD!

So while this is all brought about by a TV series, it is more of a reflection on how so many people are really missing the understanding of this phenomenon. So I intend to explain what quantum entanglement is in very simple language, using a highly simplified picture. Hopefully, it will diminish some of the false ideas and myths of what it is.

Before I dive into the quantum aspect of it, I want to start with something that is well-known, and something we teach even high school students in basic physics. It is the conservation of momentum. In Figure 1, I am showing a straightforward example of conservation of linear momentum, a common problem that we give to intro physics students.


In (a), you have an object with no initial linear momentum. In (b), it spontaneously splits into two different masses, m1 and m2, which go off in opposite directions. In (c), m1 reaches Bob and m2 reaches Alice. Bob measures the momentum of m1 to be p1. Now, this is crucial. IMMEDIATELY, without even asking Alice, Bob knows unambiguously the momentum of m2 to be p2 simply via the conservation of linear momentum. He knows this instantaneously, meaning the momentum of m2 is unambiguously determined, no matter how far m2 is from Bob. When Alice finally measures the momentum of m2, she will find that it is, indeed, equal to p2.

Yet, in all the years that we learn classical physics, never once do we ever consider that m1 and m2 are "entangled". No mystical and metaphysical essays were ever written about how these two are somehow connected and can "talk" to each other at speeds faster than light.

Now, let's go to the quantum case. Similar scenario, outlined in Figure 2.


Here, we are starting to see something slightly different. We start with an object with no net spin in (a). Then it spontaneously splits into two particles. This is where it differs from the classical case in Figure 1. Each of the daughter particles has a superposition of two possible spin states: up and down. This is what we call the SUPERPOSITION phenomenon. It was what prompted the infamous Schrödinger's cat thought experiment, where the cat is both alive and dead. This is crucial to understand because it means that the state of each of the daughter particles is NOT DETERMINED. Standard QM interpretation says that the particle has no definite spin direction, and that until it is measured, both spin states are there!

Now, when one daughter particle reaches Bob, he then measures its spin. ONLY THEN will the particle be in a particular spin state (i.e. what is commonly described as the wavefunction collapsing into a particular value). In my illustration, Bob sees that it is in a spin-down state. Immediately, the spin state of the other particle at Alice is in the spin-up state to preserve the conservation of spin angular momentum. When Bob measures the spin of his particle, he immediately knows the spin of the particle at Alice because he knows what it should be to conserve spin. This is similar to the classical case!

This superposition of states is what makes this different from the classical example above. In the classical case, even before Bob and Alice measure the momentum of their particles, there is no question that the particles have definite momenta all through their trajectories. Classical physics says that the momentum of each particle is already determined; we just need to measure it.

But in quantum physics, this isn't true. The superposition principle clearly has shown that in the creation of each of those two particles, the spin states are not determined, and that both possible states are present simultaneously. The spin state is only determined once a measurement is made on ONE of the particles. When that occurs, then the spin state of the other particle is also unambiguously determined.

This is why people have been asking how the other particle at Alice somehow knew the proper spin state to be in, because presumably, before any measurement is made, they both can randomly select either spin state to be in. Was there any signal sent from Bob's particle to Alice's to tell it what spin state to be in? We have found no such signal, and if there is, it has been shown that it will have to travel significantly faster than c. No matter how far apart the two daughter particles are, they somehow will know just what state to be in once one of them is measured.

This, boys and girls, is what we call quantum entanglement. The property of the quantum particles that we call "spin" is entangled between these two particles. Once the value of the spin of one particle is determined, it automatically forces the other particle to be in a corresponding state to preserve the conservation law.
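
A tiny numerical illustration of these perfect anticorrelations (a sketch of mine, not part of the original explanation): sample z-measurement outcomes for both particles from the spin-singlet state and check that Bob's result always fixes Alice's.

import numpy as np

# spin-singlet state (|ud> - |du>)/sqrt(2), in the basis |uu>, |ud>, |du>, |dd>,
# with Bob holding the first particle and Alice the second (the labels are mine)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
probs = np.abs(singlet) ** 2          # joint outcome probabilities: [0, 1/2, 1/2, 0]

rng = np.random.default_rng(1)
outcomes = rng.choice(4, size=10_000, p=probs)
bob = np.where(np.isin(outcomes, [0, 1]), +1, -1)     # Bob's spin along z
alice = np.where(np.isin(outcomes, [0, 2]), +1, -1)   # Alice's spin along z

assert np.all(bob == -alice)          # perfectly anticorrelated, every single time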

But note that what is entangled is the property of the particle. It is the information about the property (spin) that is undergoing the so-called quantum teleportation. The particle itself did not get "teleported" the way they teleport things in Star Trek movies/TV series. It is the property, the information about the object, that is entangled, not the entire object itself. So in this example, the object doesn't jump around all over the place.

The physics and mathematics that describe quantum entanglement are more involved than this cartoon description, of course. There are mathematical rules resulting in physical constraints to the states and properties that are entangled. So you just can't pick up anything and say that you want to entangle it with something else. It just doesn't work that way, especially if you want to clearly observe the effects of the entanglement.

The important lesson to take away from this is that you can't learn physics in bits and pieces. If you simply focus on the "entanglement" aspect and are oblivious to the existence of quantum superposition, then you will never understand why this is so different and mysterious compared to the classical case. In physics, it is not uncommon that you have to also understand a series of things leading up to a result. This is why it is truly knowledge and not merely a series of disconnected facts.

Zz.

by ZapperZ (noreply@blogger.com) at April 22, 2015 06:33 PM

Symmetrybreaking - Fermilab/SLAC

Italian neutrino experiment to move to the US

The world’s largest liquid-argon neutrino detector will help with the search for sterile neutrinos at Fermilab. 

Mysterious particles called neutrinos seem to come in three varieties. However, peculiar findings in experiments over the past two decades make scientists wonder if a fourth is lurking just out of sight.

To help solve this mystery, a group of scientists spearheaded by Nobel laureate Carlo Rubbia plans to bring ICARUS, the world’s largest liquid-argon neutrino detector, across the Atlantic Ocean to the United States. The detector is currently being refurbished at CERN, where it is the first beneficiary of a new test facility for neutrino detectors.

Neutrinos are some of the most abundant and yet also most mysterious particles in the universe. They have tiny masses, but no one is sure why—or where those masses come from. They interact so rarely that they can pass through the entire Earth as if it weren’t there. They oscillate from one type to another, so that even if you start out with one kind of neutrino, it might change to another kind by the time you detect it.

Many theories in particle physics predict the existence of a sterile neutrino, which would behave differently from the three known types of neutrino.

“Finding a fourth type of neutrinos would change the whole picture we’re trying to address with current and future experiments,” says Peter Wilson, a scientist at Fermi National Accelerator Laboratory.

The Program Advisory Committee at Fermilab recently endorsed a plan, managed by Wilson, to place a suite of three detectors in a neutrino beam at the laboratory to study neutrinos—and determine whether sterile neutrinos exist.

Over the last 20 years, experiments have seen clues pointing to the possible existence of sterile neutrinos. Their influence may have caused two different types of unexpected neutrino behavior seen at the Liquid Scintillator Neutrino Detector experiment at Los Alamos National Laboratory in New Mexico and the MiniBooNE experiment at Fermilab.

Both experiments saw indications that a surprisingly large number of neutrinos may be morphing from one kind to another a short distance from a neutrino source. The existence of a fourth type of neutrino could encourage this fast transition.

The new three-detector formation at Fermilab could provide the answer to this mystery.

In the suite of experiments, a 260-ton detector called Short Baseline Neutrino Detector will sit closest to the source of the beam, so close that it will be able to detect the neutrinos before they’ve had a chance to change from one type into another. This will give scientists a baseline to compare with results from the other two detectors. SBND is under construction by a team of scientists and engineers from universities in the United Kingdom, the United States and Switzerland, working with several national laboratories in Europe and the US.

The SBND detector will be filled with liquid argon, which gives off flashes of light when other particles pass through it.

“Liquid argon is an extremely exciting technology to make precision measurements with neutrinos,” says University of Manchester physicist Stefan Soldner-Rembold, who leads the UK project building a large section of the detector. “It’s the technology we’ll be using for the next 20 to 30 years of neutrino research.”

Farther from the beam will be the existing 170-ton MicroBooNE detector, which is complete and will begin operation at Fermilab this year. The MicroBooNE detector was designed to find out whether the excess of particles seen by MiniBooNE was caused by a new type of neutrino or a new type of background. Identifying either would have major implications for future neutrino experiments.

Finally, farthest from the beam would be a liquid-argon detector more than four times the size of MicroBooNE. The 760-ton detector was used in the ICARUS experiment, which studied neutrino oscillations at Gran Sasso Laboratory in Italy using a beam of neutrinos produced at CERN from 2010 to 2014.

Its original beam at CERN is not optimized for the next stage of the sterile neutrino search. “The Fermilab beamline is the only game in town for this type of experiment,” says physicist Steve Brice, deputy head of Fermilab’s Neutrino Division.

And the ICARUS detector “is the best detector in the world to detect this kind of particle,” says Alberto Scaramelli, the former technical director of Gran Sasso National Laboratory. “We should use it.”

Rubbia, who initiated construction of ICARUS and leads the ICARUS collaboration, proposed bringing the detector to Fermilab in August 2013. Since then, the ICARUS, MicroBooNE and SBND groups have banded together to create the current proposal. The updated plan received approval from the Fermilab Program Advisory Committee in February.

“The end product was really great because it went through the full scrutiny of three different collaborations,” says MicroBooNE co-leader Sam Zeller. “The detectors all have complementary strengths.”

In December, scientists shipped the ICARUS detector from the Gran Sasso laboratory to CERN, where it is currently undergoing upgrades. The three-detector short-baseline neutrino program at Fermilab is scheduled to begin operation in 2018.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at April 22, 2015 01:31 PM

April 21, 2015

Symmetrybreaking - Fermilab/SLAC

Mu2e breaks ground on experiment

Scientists seek rare muon conversion that could signal new physics.

This weekend, members of the Mu2e collaboration dug their shovels into the ground of Fermilab's Muon Campus for the experiment that will search for the direct conversion of a muon into an electron in the hunt for new physics.

For decades, the Standard Model has stood as the best explanation of the subatomic world, describing the properties of the basic building blocks of matter and the forces that govern them. However, challenges remain, including that of unifying gravity with the other fundamental forces or explaining the matter-antimatter asymmetry that allows our universe to exist. Physicists have since developed new models, and detecting the direct conversion of a muon to an electron would provide evidence for many of these alternative theories.

"There's a real possibility that we'll see a signal because so many theories beyond the Standard Model naturally allow muon-to-electron conversion," said Jim Miller, a co-spokesperson for Mu2e. "It'll also be exciting if we don't see anything, since it will greatly constrain the parameters of these models."

Muons and electrons are two different flavors in the charged-lepton family. Muons are 200 times more massive than electrons and decay quickly into lighter particles, while electrons are stable and live forever. Most of the time, a muon decays into an electron and two neutrinos, but physicists have reason to believe that once in a blue moon, a muon will convert directly into an electron without releasing any neutrinos. This is physics beyond the Standard Model.

Under the Standard Model, the muon-to-electron direct conversion happens too rarely to ever observe. In more sophisticated models, however, this occurs just frequently enough for an extremely sensitive machine to detect.

The Mu2e detector, when complete, will be the instrument to do this. The 92-foot-long apparatus will have three sections, each with its own superconducting magnet. Its unique S-shape was designed to capture as many slow muons as possible with an aluminum target. The direct conversion of a muon to an electron in an aluminum nucleus would release exactly 105 million electronvolts of energy, which means that if it occurs, the signal in the detector will be unmistakable. Scientists expect Mu2e to be 10,000 times more sensitive than previous attempts to see this process.
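
The 105 MeV figure follows from simple kinematics: the conversion electron carries essentially the muon's rest energy, minus the binding energy of the muon in the aluminum atom and the small recoil of the nucleus. A rough check, using approximate textbook-level values for the binding and recoil terms (they are assumptions here, not numbers quoted in the article):

```python
# Back-of-the-envelope check of the ~105 MeV conversion-electron energy.
# The binding and recoil energies for muonic aluminum are approximate values
# assumed here, not numbers taken from the article.
M_MU = 105.658     # muon rest energy, MeV
B_MU_AL = 0.48     # approx. 1s binding energy of a muon in aluminum, MeV (assumed)
E_RECOIL = 0.21    # approx. kinetic energy of the recoiling Al nucleus, MeV (assumed)

e_conversion = M_MU - B_MU_AL - E_RECOIL
print(f"Expected conversion-electron energy: {e_conversion:.2f} MeV")  # ~104.97 MeV
```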

Construction will now begin on a new experimental hall for Mu2e. This hall will eventually house the detector and the infrastructure needed to conduct the experiment, such as the cryogenic systems to cool the superconducting magnets and the power systems to keep the machine running.

"What's nice about the groundbreaking is that it becomes a real thing. It's a long haul, but we'll get there eventually, and this is a start," said Julie Whitmore, deputy project manager for Mu2e.

The detector hall will be complete in late 2016. The experiment, funded mainly by the Department of Energy Office of Science, is expected to begin in 2020 and run for three years until peak sensitivity is reached.

"This is a project that will be moving along for many years. It won't just be one shot," said Stefano Miscetti, the leader of the Italian INFN group, Mu2e's largest international collaborator. "If we observe something, we will want to measure it better. If we don't, we will want to increase the sensitivity."

Physicists around the world are working to extend the frontiers of the Standard Model. One hundred seventy-eight people from 31 institutions are coming together for Mu2e to make a significant impact on this venture.

"We're sensitive to the same new physics that scientists are searching for at the Large Hadron Collider, we just look for it in a complementary way," said Ron Ray, Mu2e project manager. "Even if the LHC doesn't see new physics, we could see new physics here."

 

by Diana Kwon at April 21, 2015 02:18 PM

Clifford V. Johnson - Asymptotia

In Case You Wondered…
Dear visitor who came here (perhaps) after visiting the panel I participated in on Saturday at the LA Times Festival of Books. ("Grasping the Ineffable: On Science and Health") What a fun discussion! Pity we ran out of time before we really began to explore connections, perhaps inspired by more audience questions. In any event, in case you wondered why I was not signing books at the end at the designated signing area, I thought I'd write this note. I was given the option to do so, but the book that I currently have out is a specialist monograph, and I did not think there'd be much demand for it at a general festival such as the one on the weekend. (Feel free to pick up a copy if you wish, though. It is called "D-Branes", and it is here.) The book I actually mentioned during the panel, since it is indeed among my current attempts to grasp the "ineffable" of the panel title, is a work in progress. (Hence my variant of the "under construction" sign on the right.) It is a graphic book (working title "The Dialogues") pitched at a general audience that explores a lot of contemporary physics topics in an unusual way. It is scheduled for publication in 2017 by Imperial College Press. You can find out much more about it here. Feel free to visit this blog for updates on how the book progresses, and of course lots of other topics and conversations too (which you are welcome to join). -cvj Click to continue reading this post

by Clifford at April 21, 2015 02:15 PM

Quantum Diaries

Mu2e breaks ground on experiment seeking new physics

This article appeared in Fermilab Today on April 21, 2015.

Fermilab’s Mu2e groundbreaking ceremony took place on Saturday, April 18. From left: Alan Stone (DOE Office of High Energy Physics), Nigel Lockyer (Fermilab director), Jim Siegrist (DOE Office of High Energy Physics director), Ron Ray (Mu2e project manager), Paul Philp (Mu2e federal project director at the Fermi Site Office), Jim Miller (Mu2e co-spokesperson), Doug Glenzinski (Mu2e co-spokesperson), Martha Michels (Fermilab ESH&Q head), Mike Shrader (Middough architecture firm), Julie Whitmore (Mu2e deputy project manager), Jason Whittaker (Whittaker Construction), Tom Lackowski (FESS). Photo: Reidar Hahn


Diana Kwon

See a two-minute video on the ceremony

by Fermilab at April 21, 2015 02:10 PM

astrobites - astro-ph reader's digest

Signals from Hidden Dwarf Galaxies

Title: Beacons in the Dark: Using Novae and Supernovae to Detect Dwarf Galaxies in the Local Universe
Authors: Charlie Conroy and James S. Bullock
First Author’s institution: Dept. of Astronomy, Harvard University
Status: Accepted to ApJ

Dwarf galaxies are, as the name implies, the smallest of the galaxies. In our local neighborhood around the Milky Way, they range in size from the small Segue 2, with a mass of about 5 x 10^5 solar masses, to the Large Magellanic Cloud (LMC), with a total mass of about 10^10 solar masses. Figure 1 shows an image of the LMC, and its companion, the Small Magellanic Cloud. Although we have a fairly good understanding of galaxy formation and evolution for massive galaxies (think the Milky Way and bigger), our understanding of these smallest galaxies is not nearly as complete. This is partly because these smaller, less luminous galaxies are challenging to observe at distances larger than a few megaparsecs (Mpc) from our Milky Way (the nearest massive galaxy to us, Andromeda, is about 0.77 Mpc away). Within that distance, we can detect them by resolving their stars. But for far away dwarf galaxies, an improved detection method to increase the number of observed dwarf galaxies would go a long way towards improving what we know of these small galaxies.

Figure 1: An optical image of the Large and Small Magellanic Clouds. These are both Local Group dwarf galaxies, and satellites of the Milky Way. (Source: ESO)

The authors of today’s paper propose a new method relying on nova and supernova explosions to detect dwarf galaxies farther away than what is currently possible. Novae and supernovae are among the brightest astronomical events that we can observe. Novae occur in binary star pairs containing a white dwarf and a larger companion star. As gas accretes from the larger star onto the white dwarf, the hydrogen on the surface of the white dwarf eventually hits a critical mass, nuclear burning occurs, and a nova is produced. This is not to be confused with a Type Ia supernova, however, where the total mass of the white dwarf hits a critical limit (1.4 solar masses) and the whole star explodes. Both of these events are very luminous, but short lived. Detecting them is often a matter of getting lucky (looking in the right direction at the right time). The authors propose using current and ongoing surveys designed to detect these transient events to identify new and otherwise unobservable dwarf galaxies. (See this related astrobite discussing yet another way to detect dwarf galaxies.) The authors use one of these, the planned Large Synoptic Survey Telescope (LSST), as a baseline to judge what will be observable in the near future.

The Faintest of them All

Figure 2: The surface brightness (left) and radius (right) of all known dwarf galaxies in our local neighborhood as a function of their absolute visual magnitude (bottom axis). This is converted to stellar mass in the top axis. In the left plot, the dashed line shows the detection limit of the LSST for galaxies where the individual stars cannot be resolved (i.e. for galaxies farther than about 3 Mpc from us). The dashed lines in the right plot show (roughly) the distances out to which galaxies of the given radius are resolvable by LSST. (Source: Conroy & Bullock 2015)

Figure 2 illustrates the difficulty in directly detecting dwarf galaxies. Shown are the surface brightness (left) and radius (right) of all known nearby dwarf galaxies as a function of the absolute visual magnitude of the dwarf galaxies (translated to stellar mass at the top axis). Most dwarf galaxies have been detected by resolving their individual stars. However, this can only be done out to a few Mpc; beyond this, they can only be detected if they are above a certain brightness. The dashed line in the left plot shows this limit for the faintest object the LSST can detect. In other words, anything below the dashed line can only be detected if it is within a few Mpc of us. Therefore there may be many galaxies that we simply cannot (yet) observe. In the right-hand plot, the dashed lines give the farthest distances at which galaxies of the indicated sizes (0.1 kpc and 1.0 kpc) can be resolved by LSST (about 20 and 200 Mpc, respectively).
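
Both limits are simple geometry: whether an unresolved dwarf clears the brightness cut follows from the distance modulus, and the resolvability distances follow from comparing the galaxy's angular size to the image resolution. A minimal sketch, with an assumed limiting magnitude of ~27 and roughly arcsecond-scale resolution (neither number is taken from the paper):

```python
import math

def apparent_mag(abs_mag, d_mpc):
    """Distance modulus: m = M + 5*log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(d_mpc * 1e6 / 10.0)

def angular_size_arcsec(radius_kpc, d_mpc):
    """Small-angle size of a galaxy of the given physical radius."""
    return math.degrees(radius_kpc / (d_mpc * 1e3)) * 3600.0

# A faint dwarf (M_V ~ -8) at 30 Mpc, versus an assumed limiting magnitude of ~27:
print(f"m_V at 30 Mpc: {apparent_mag(-8.0, 30.0):.1f}")
# Angular sizes at the quoted resolvability distances, versus ~1 arcsec resolution:
print(f"0.1 kpc at  20 Mpc: {angular_size_arcsec(0.1, 20.0):.1f} arcsec")
print(f"1.0 kpc at 200 Mpc: {angular_size_arcsec(1.0, 200.0):.1f} arcsec")
```

Both angular sizes come out near one arcsecond, which is roughly why the 0.1 kpc and 1.0 kpc curves cut off near 20 and 200 Mpc.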

The authors argue that ongoing and upcoming surveys looking for transient events, like the LSST, will observe many novae and supernovae associated with undiscovered dwarf galaxies. In order to predict the likelihood of this occurring, and the types of dwarf galaxies we may discover, the authors construct a model to predict the rate of novae and supernovae in dwarf galaxies. The authors take what we know about the distribution of dwarf galaxies and how their star formation rates and histories vary as a function of their stellar masses, and make some assumptions about how often novae, Type Ia supernovae, and Type II supernovae occur as a function of the star formation rate of the galaxy. Combining these factors, and noting the significant uncertainties in some of their assumptions, the authors predict rates of novae and supernovae in dwarf galaxies as a function of the dwarf galaxy’s stellar mass. A toy version of that bookkeeping is sketched below.
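
A heavily simplified sketch of that kind of estimate (this is not the authors' model; every normalization below is an assumed, illustrative number):

```python
# Toy per-galaxy transient rates, scaled to stellar mass. All normalizations
# are illustrative assumptions, not the values adopted by Conroy & Bullock.
def sfr(mstar):
    """Toy star formation rate (Msun/yr), assuming a specific SFR of ~1e-10 per yr."""
    return 1e-10 * mstar

def cc_sn_rate(mstar, sn_per_msun_formed=0.01):
    """Core-collapse SN rate (per yr), assuming ~0.01 SNe per Msun of stars formed."""
    return sn_per_msun_formed * sfr(mstar)

def nova_rate(mstar, novae_per_1e10_msun=2.0):
    """Nova rate (per yr), assuming ~2 novae per yr per 1e10 Msun of stars."""
    return novae_per_1e10_msun * mstar / 1e10

for mstar in (1e5, 1e6, 1e8):
    print(f"M* = {mstar:.0e} Msun: {cc_sn_rate(mstar):.1e} SNe/yr, "
          f"{nova_rate(mstar):.1e} novae/yr per galaxy")
```

The per-galaxy rates are tiny, so the predicted yields in Figure 3 come from summing over the very large number of dwarfs expected within tens of Mpc.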

Figure 3: The predicted supernova and nova rates occurring in dwarf galaxies within a certain distance (horizontal axis) of the Milky Way. The left and right plots show the predictions for two different assumptions about the relationship between stellar mass and dark matter mass in dwarf galaxies (the lines in the right plot are shifted upwards compared to the left). The lines in each plot show the rates for galaxies of three different stellar masses. The dotted portions of these lines show which of these galaxies are spatially resolvable, solid portions show those that are unresolved but brighter than the limit in Figure 2, and dashed portions show those that are currently undetectable. Galaxies that fall in the gray and blue boxes can be detected via the LSST through novae and supernovae respectively. (Source: Conroy & Bullock 2015)

This is shown in Figure 3 for two different assumptions about how the stellar mass of a dwarf galaxy scales with the dark matter mass of that galaxy. Shown are the supernova and nova rates in dwarf galaxies within a certain distance (horizontal axis) from us. The three lines in each show these rates for galaxies with stellar masses less than 10^8, 10^6, and 10^5 solar masses. The dotted (leftmost) portions of each line show the distances at which galaxies of that mass can be spatially resolved (i.e. where we don’t need this new method), the solid portions show galaxies that are unresolved but brighter than the limit given in Figure 2, and the dashed portions are currently unobservable. Due in part to the “cadence” of the LSST (the length of the camera exposures and the frequency with which a given region is re-observed during the survey) there are limits as to which of these galaxies can be observed with the new method. The authors predict that galaxies which fall in the gray and blue boxes will be detectable by the LSST via novae and supernovae respectively.

Matching Supernovae to Dwarf Galaxies

With their calculations, the authors conclude that the upcoming LSST should be able to detect 10-100 novae per year from dwarf galaxies with stellar masses of 10^5-10^6 solar masses out to about 30 Mpc, and 100-10,000 supernovae in these galaxies every year. With this, the LSST can be used to detect many more dwarf galaxies than currently known. Even though they are so faint, once we have discovered where these dwarf galaxies lie, we can use focused follow-up observations to observe them directly. With an increased sample of known dwarf galaxies stretching far from our Local Group of galaxies, we can better understand how galaxies form and evolve on the smallest scales.

by Andrew Emerick at April 21, 2015 01:33 PM

Sean Carroll - Preposterous Universe

Quantum Field Theory and the Limits of Knowledge

Last week I had the pleasure of giving a seminar to the philosophy department at the University of North Carolina. Ordinarily I would have talked about the only really philosophical work I’ve done recently (or arguably ever), deriving the Born Rule in the Everett approach to quantum mechanics. But in this case I had just talked about that stuff the day before, at a gathering of local philosophers of science.

So instead I decided to use the opportunity to get some feedback on another idea I had been thinking about — our old friend, the claim that The Laws of Physics Underlying Everyday Life Are Completely Understood (also here, here). In particular, given that I was looking for feedback from a group of people that had expertise in philosophical matters, I homed in on the idea that quantum field theory has a unique property among physical theories: any successful QFT tells us very specifically what its domain of applicability is, allowing us to distinguish the regime where it should be accurate from the regime where we can’t make predictions.

The talk wasn’t recorded, but here are the slides. I recycled a couple of ones from previous talks, but mostly these were constructed from scratch.

The punchline of the talk was summarized in this diagram, showing different regimes of phenomena and the arrows indicating what they depend on:

layers

There are really two arguments going on here, indicated by the red arrows with crosses through them. These two arrows, I claim, don’t exist. The physics of everyday life is not affected by dark matter or any new particles or forces, and its only dependence on the deeper level of fundamental physics (whether it be string theory or whatever) is through the intermediary of what Frank Wilczek has dubbed “The Core Theory” — the Standard Model plus general relativity. The first argument (no new important particles or forces) relies on basic features of quantum field theory, like crossing symmetry and the small number of species that go into making up ordinary matter. The second argument is more subtle, relying on the idea of effective field theory.

So how did it go over? I think people were properly skeptical and challenging, but for the most part they got the point, and thought it was interesting. (Anyone who was in the audience is welcome to chime in and correct me if that’s a misimpression.) Mostly, since this was a talk to philosophers rather than physicists, I spent my time doing a pedagogical introduction to quantum field theory, rather than diving directly into any contentious claims about it — and learning something new is always a good thing.

by Sean Carroll at April 21, 2015 01:32 PM

April 20, 2015

astrobites - astro-ph reader's digest

The Lives of the Longest Lived Stars
  • Title: The End of the Main Sequence
  • Authors: Gregory Laughlin, Peter Bodenheimer, and Fred C. Adams
  • First Author’s Institution: University of Michigan (when published), University of California at Santa Cruz (current)
  • Publication year: 1997

Heavy stars live like rock stars: they live fast, become big, and die young. Low mass stars, on the other hand, are more persistent, and live longer. The ages of the former stars are measured in millions to billions of years; the expected lifetimes of the latter are measured in trillions. Low mass stars are the tortoise that beats the hare.

red_dwarf_art

Figure 1: An artist’s impression of a low-mass dwarf star. Figure from here.

But why do we want to study the evolution of low mass stars, and their less than imminent demise? There are various good reasons. First, galaxies are composed of stars (and other things, but here we focus on the stars). Second, low-mass stars are by far the most numerous stars in the galaxy: about 70% of stars in the Milky Way are less than 0.3 solar masses (also denoted as 0.3M). Third, low-mass stars provide useful insights into stellar evolution: if you want to understand why higher mass stars evolve in a certain way (e.g. develop into red giants), it is helpful to take a careful look at why the lowest mass stars do not.

Today’s paper was published in 1997 and marked the first time the evolution and long-term fate of the lowest mass stars were calculated. It still gives a great overview of their lifespans, which we look at in this astrobite.

Stellar evolution: The life of a 0.1M star

The authors use numerical methods to evolve the lowest mass stars. The chart below summarizes the lifespan of a 0.1M star on the Hertzsprung-Russell diagram, which plots a star’s luminosity as a function of effective temperature. The diagram is the star’s Facebook wall; it gives insight into events in the star’s life. Let’s dive in and follow the star’s lifespan, starting from the beginning.

The star starts out as a protostar, a condensing molecular cloud that descends down the Hayashi track. As the protostar condenses it releases gravitational energy, it gets hotter, and pressures inside it increase. After about 2 billion years of contraction, hydrogen fusion starts in the core. We have reached the Zero Age Main Sequence (ZAMS), where the star will spend most of its life, fusing hydrogen to helium.

Figure 2: The life of a 0.1M star shown on the Hertzsprung-Russell diagram, where temperature increases to the left. Interesting life events are labelled. Figure 1 from the paper, with an annotated arrow.

 

The fusion process creates two isotopes of helium: 3He, an intermediate product, and 4He, the end product. The inset chart plots the core composition of H, 3He, and 4He. We see that for the first trillion (note trillion) years hydrogen drops, while 4He increases. 3He reaches a maximum, and then tapers off. As the star’s average molecular weight increases, the star grows hotter and more luminous. It moves to the upper left on the diagram. The star has now been evolving for roughly 5.7 trillion years, slowly turning into a hot helium dwarf.

The red arrow on the diagram marks a critical juncture in the star’s life. Before now, the energy created by fusion has been transported by convection, which heats up the stellar material, causing it to move and mix with colder parts of the star, much in the same way a conventional radiator heats your room. This has kept the star well mixed, and maintained a homogeneous chemical composition throughout the star. Now, the physics behind the energy transport changes. The increasing amounts of helium lower the opacity of the star, a measure of radiation impenetrability. Lowering the opacity makes it easier for photons to travel larger distances inside the star, making them more effective than convection at transporting energy. We say that the stellar core becomes radiative. This causes the entire star to contract and produces a sudden decline in luminosity (see red arrow).

red_dwarf

Figure 3: The interior of a 0.1M star. The red arrow in Figure 2 marks the point where the star’s core changes from being convective to radiative. Figure from here.

Now the evolutionary timescale accelerates. The core, now pure helium, continues to increase in mass as hydrogen burns in a shell around it. On the Hertzsprung-Russell diagram the star moves rapidly to higher temperatures, and will eventually grow hotter than the current Sun, but only 1% as bright. Afterwards, the star turns a corner. The star starts to cool off, the shell source is slowly extinguished, and the luminosity decreases. The star is on the cooling curve, moving towards Florida on the Hertzsprung-Russell diagram, on its way to becoming a low-mass helium white dwarf.

The total nuclear burning lifetime of the star is somewhat more than 6 trillion years, and during that time the star used up 99% of its initial hydrogen; the Sun will only burn about 10%. Incredible efficiency.
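
That lifetime can be checked with a back-of-the-envelope energy budget: the nuclear energy available from burning nearly all of the hydrogen, divided by a typical luminosity. A rough sketch (the 0.7% fusion efficiency and the assumed mean luminosity of ~10^-3 solar are standard ballpark numbers, not values from the paper):

```python
# Rough consistency check of the multi-trillion-year lifetime of a 0.1 Msun star.
M_SUN = 1.989e30      # kg
L_SUN = 3.828e26      # W
C = 2.998e8           # m/s
YEAR = 3.156e7        # s

m_star = 0.1 * M_SUN
x_hydrogen = 0.75     # assumed initial hydrogen mass fraction
f_burned = 0.99       # fraction of the hydrogen the star manages to burn
efficiency = 0.007    # fraction of rest mass released by H -> He fusion
l_mean = 1e-3 * L_SUN # assumed mean luminosity over the star's life

e_total = f_burned * x_hydrogen * m_star * efficiency * C**2
print(f"Lifetime ~ {e_total / l_mean / YEAR:.1e} yr")  # of order 10^13 yr
```

The crude estimate lands within a factor of a few of the 6-trillion-year figure from the detailed calculation, mostly because the star's luminosity is not constant over its life.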

The lifespans of 0.06M – 0.20M stars

Additionally, the authors compare the lifespans of stars with masses similar to the 0.1M star. Their results are shown in Figure 4. The lightest object, at 0.06M, never starts fusing hydrogen. Instead, it rapidly cools and fades away as a brown dwarf. Stars with masses between 0.08M and 0.16M have similar lives to the star in Figure 2. All of them move increasingly to the left on the Hertzsprung-Russell diagram after developing a radiative core. The radiative cores appear at progressively earlier times in the evolution as the masses increase. Stars in the mass range 0.16M-0.20M behave differently, and the authors mark them as an important transition group. These stars have a growing ability to swell compared to the lighter stars, the property that ultimately drives even higher mass stars to become red giants.

laughlin_hr_diagram2

Figure 4: The evolution of stars with masses between 0.06M and 0.25M shown on a Hertzsprung-Russell diagram. The inset chart shows that stellar lifetimes drop with increasing mass. Figure 2 from the paper.

Implications

When it comes to fusing hydrogen, slow and steady wins the stellar age-race. We see that the lowest mass stars can reach ages that greatly exceed the current age of the universe, by a whopping factor of 100-1000! These stars are both the longest lived, and also the most numerous in the galaxy and the universe. Most of the stellar evolution that will occur is yet to come.

by Gudmundur Stefansson at April 20, 2015 07:44 PM

ZapperZ - Physics and Physicists

Cyclotron Radiation From One Electron
It is a freakingly cool experiment!

We now can see the cyclotron radiation from a single electron, folks!

The researchers plotted the detected radiation power as a function of time and frequency (Fig. 2). The bright, upward-angled streaks of radiation indicate the radiation emitted by a single electron. It is well known theoretically that a circling electron continuously emits radiation. As a result, it gradually loses energy and orbits at a rate that increases linearly in time. The detected radiation streaks have the same predicted linear dependence, which is what allowed the researchers to associate them with a single electron. 

Of course, we have seen such effects for many electrons in synchrotron rings all over the world, but to see it not only for one electron, but also to watch how it loses energy as it orbits, is rather neat. It reinforces the fact that we can't really imagine electrons "orbiting" around a nucleus in an atom in the classical way, because if they did, we would detect such cyclotron radiation, and they would eventually crash into the nucleus.
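
The chirp they see is just the relativistic cyclotron formula at work: f = eB/(2*pi*gamma*m_e), so as the electron radiates away kinetic energy, gamma falls and the frequency creeps up. A quick estimate with assumed ballpark numbers (roughly a 1 tesla trap field and a ~30 keV electron; neither value is taken from the paper):

```python
import math

# f = e*B / (2*pi*gamma*m_e): the frequency rises as the electron loses energy.
E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg
M_E_KEV = 511.0        # electron rest energy, keV

def cyclotron_freq_ghz(b_tesla, kinetic_kev):
    gamma = 1.0 + kinetic_kev / M_E_KEV
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_E) / 1e9

print(f"30 keV electron in 1 T: {cyclotron_freq_ghz(1.0, 30.0):.2f} GHz")
print(f"after radiating 1 keV:  {cyclotron_freq_ghz(1.0, 29.0):.2f} GHz")
```

That puts the signal in the tens-of-GHz microwave range, which is why it can be picked up with radio-frequency techniques.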

But I also find it interesting that this has more to do with the effort to determine the mass of the neutrino, independently of neutrino oscillations, by measuring the electron's energy to high accuracy in beta decay.

Zz.

by ZapperZ (noreply@blogger.com) at April 20, 2015 07:13 PM

Clifford V. Johnson - Asymptotia

Festivities (I)
Love this picture posted by USC's Facebook page*. (I really hope that we did not go over the heads of our - very patient** - audience during the Festival of Books panel...) -cvj *They don't give a photo credit, so I'm pointing you back to the posting here until I work it out. [...] Click to continue reading this post

by Clifford at April 20, 2015 04:12 PM

Jester - Resonaances

2014 Mad Hat awards
New Year is traditionally the time of recaps and best-ofs. This blog is focused on particle physics beyond the standard model where compiling such lists is challenging, given the dearth of discoveries or even   plausible signals pointing to new physics.  Therefore I thought I could somehow honor those who struggle to promote our discipline by finding new signals against all odds, and sometimes against all logic. Every year from now on, the Mad Hat will be awarded to the researchers who make the most outlandish claim of a particle-physics-related discovery, on the condition it gets enough public attention.

The 2014 Mad Hat award unanimously goes to Andy Read, Steve Sembay, Jenny Carter, Emile Schyns, and, posthumously, to George Fraser, for the paper Potential solar axion signatures in X-ray observations with the XMM–Newton observatory. Although the original arXiv paper sadly went unnoticed, this remarkable work was publicized several months later by a Royal Astronomical Society press release and by an article in the Guardian.

The crucial point in this kind of endeavor is to choose an observable that is noisy enough to easily accommodate a new physics signal. In this particular case the observable is x-ray emission from Earth's magnetosphere, which could include a component from axion dark matter emitted from the Sun and converting to photons. A naive axion hunter might expect that the conversion signal should be observed by looking at the sun (that is, the photon inherits the momentum of the incoming axion), something that XMM cannot do due to technical constraints. The authors thoroughly address this point in a sentence in the Introduction, concluding that it would be nice if the x-rays could scatter afterwards at the right angle. Then the signal that is searched for is an annual modulation of the x-ray emission, as the magnetic field strength in XMM's field of view is on average larger in summer than in winter. A seasonal dependence of the x-ray flux is indeed observed, for which axion dark matter is clearly the most plausible explanation.
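
For the record, the observable itself is simple enough: fit the x-ray flux time series with a constant plus a sinusoid of one-year period and ask whether the amplitude is nonzero. A minimal sketch on synthetic data (this only illustrates the observable, not the authors' analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 6 * 365.25, 500))     # observation times, days
flux = 10.0 + 0.5 * np.cos(2 * np.pi * t / 365.25) + rng.normal(0, 1.0, t.size)

def model(t, const, amp, phase):
    return const + amp * np.cos(2 * np.pi * t / 365.25 + phase)

popt, pcov = curve_fit(model, t, flux, p0=[10.0, 0.1, 0.0])
print(f"annual-modulation amplitude: {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```

The hard part, of course, is convincing anyone that a nonzero amplitude is axions rather than one of the many mundane things that also vary with the seasons.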

Congratulations to all involved. Nominations for the 2015 Mad Hat award are open as of today ;) Happy New Year everyone!

by Jester (noreply@blogger.com) at April 20, 2015 04:03 PM

Jester - Resonaances

Weekend Plot: Fermi and more dwarfs
This weekend's plot comes from the recent paper of the Fermi collaboration:

It shows the limits on the cross section of dark matter annihilation into tau lepton pairs. The limits are obtained from gamma-ray observations of 15 dwarf galaxies over 6 years. Dwarf galaxies are satellites of the Milky Way made mostly of dark matter with few stars in them, which makes them a clean environment to search for dark matter signals. This study is particularly interesting because it is sensitive to dark matter models that could explain the gamma-ray excess detected from the center of the Milky Way. Similar limits for the annihilation into b-quarks have already been shown before at conferences. In that case, the region favored by the Galactic center excess seems entirely excluded. Annihilation of 10 GeV dark matter into tau leptons could also explain the excess. As can be seen in the plot, in this case there is also a large tension with the dwarf limits, although astrophysical uncertainties help to keep hopes alive.

Gamma-ray observations by Fermi will continue for another few years, and the limits will get stronger. But a faster way to increase the statistics may be to find more observation targets. Numerical simulations with vanilla WIMP dark matter predict a few hundred dwarfs around the Milky Way. Interestingly, a discovery of several new dwarf candidates was reported last week. This is an important development, as the total number of known dwarf galaxies now exceeds the number of dwarf characters in Peter Jackson movies. One of the candidates, known provisionally as DES J0335.6-5403 or Reticulum-2, has a large J-factor (the larger the better, much like the h-index). In fact, some gamma-ray excess around 1-10 GeV is observed from this source, and one paper last week even quantified its significance as ~4 astrosigma (or ~3 astrosigma in an alternative more conservative analysis). However, in the Fermi analysis using the more recent Pass-8 photon reconstruction, the significance quoted is only 1.5 sigma. Moreover, the dark matter annihilation cross section required to fit the excess is excluded by an order of magnitude by the combined dwarf limits. Therefore, for the moment, the excess should not be taken seriously.
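
The reason the J-factor matters is that it multiplies the whole signal: the expected gamma-ray flux from one dwarf follows the standard formula Phi = <sigma v>/(8*pi*m_chi^2) x N_gamma x J, so a bright (high-J) dwarf buys sensitivity directly. A quick sketch with placeholder inputs (the J-factor and photon yield below are illustrative assumptions, not numbers from the Fermi analysis):

```python
import math

def gamma_flux(sigma_v_cm3s, m_chi_gev, j_factor, n_gamma):
    """Photons/cm^2/s toward one dwarf, for J in GeV^2 cm^-5."""
    return sigma_v_cm3s / (8 * math.pi * m_chi_gev**2) * n_gamma * j_factor

phi = gamma_flux(sigma_v_cm3s=3e-26,  # canonical thermal cross section
                 m_chi_gev=10.0,      # the 10 GeV candidate discussed above
                 j_factor=1e19,       # assumed J-factor of a bright dwarf
                 n_gamma=10.0)        # assumed photons per annihilation above threshold
print(f"expected flux ~ {phi:.1e} photons/cm^2/s")
```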

by Jester (noreply@blogger.com) at April 20, 2015 04:03 PM

Jester - Resonaances

LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+ μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV^2, see the plot. The measurement moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces new contributions to the 4-fermion operator (\bar b_L \gamma_\rho s_L) (\bar \mu \gamma_\rho \mu) can significantly improve agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.

So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing at this point. Other observables may tip the scale. If a consistent pattern of deviations in several B-physics observables emerges, only then can we trumpet victory.


Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine. 

by Jester (noreply@blogger.com) at April 20, 2015 04:01 PM

Matt Strassler - Of Particular Significance

Completed Final Section of Article on Dark Matter and LHC

As promised, I’ve completed the third section, as well as a short addendum to the second section, of my article on how experimenters at the Large Hadron Collider [LHC] can try to discover dark matter particles.   The article is here; if you’ve already read what I wrote as of last Wednesday, you can pick up where you left off by clicking here.

Meanwhile, in the last week there were several dark-matter related stories that hit the press.

The Dark Energy Survey has made a map of dark matter’s location across a swathe of the universe, based on the assumption that weak signals of gravitational lensing (bending of light by gravity) that cannot be explained by observed stars and dust are due to dark matter. This will be useful down the line as we test simulations of the universe such as the one I referred you to on Wednesday.

There’s been a claim that dark matter interacts with itself, which got a lot of billing in the BBC; however one should be extremely cautious with this one, and the BBC editor should have put the word “perhaps” in the headline! It’s certainly possible that dark matter interacts with itself much more strongly than it interacts with ordinary matter, and many scientists (including myself) have considered this possibility over the years.  However, the claim reported by the BBC is considered somewhat dubious even by the authors of the study, because the little group of four galaxies they are studying is complicated and has to be modeled carefully.  The effect they observed may well be due to ordinary astrophysical effects, and in any case it is less than 3 Standard Deviations away from zero, which makes it more a hint than evidence.  We will need many more examples, or a far more compelling one, before anyone will get too excited about this.

Finally, the AMS experiment (whose early results I reported on here; you can find their September update here) has released some new results, but not yet in papers, so there’s limited information. The most important result is the one whose details will apparently take longest to come out: this is the discovery (see the figure below) that the ratio of anti-protons to protons in cosmic rays of energies above 100 GeV is not decreasing as was expected. (Note this is a real discovery by AMS alone, in contrast to the excess positron-to-electron ratio at similar energies, which was discovered by PAMELA and confirmed by AMS.) The only problem is that they’ve made the discovery seem very exciting and dramatic by comparing their work to expectations from a model that is out of date and that no one seems to believe. This model (the brown swathe in the Figure below) tries to predict how high-energy anti-protons are produced (“secondary production”) from even higher energy protons in cosmic rays. Newer versions of this model are apparently significantly higher than the brown curve. Moreover, some scientists claim also that the uncertainty band (the width of the brown curve) on these types of models is wider than shown in the Figure. At best, the modeling needs a lot more study before we can say that this discovery is really in stark conflict with expectations. So stay tuned, but again, this is not yet something in which one can have confidence. The experts will be busy.

Figure 1. Antiproton to proton ratio (red data points, with uncertainties given by vertical bars) as measured by AMS. AMS claims that the measured ratio cannot be explained by existing models of secondary production, but the model shown (brown swathe, with uncertainties given by the width of the swathe) is an old one; newer ones lie closer to the data. Also, the uncertainties in the models are probably larger than shown. Whether this is a true discrepancy with expectations is now a matter of healthy debate among the experts.


Filed under: Uncategorized

by Matt Strassler at April 20, 2015 12:38 PM

Lubos Motl - string vacua and pheno

ATLAS: 2.5-sigma four-top-quark excess
ATLAS has posted a new preprint
Analysis of events with \(b\)-jets and a pair of leptons of the same charge in \(pp\)-collisions at \(\sqrt s = 8\TeV\) with the ATLAS detector
boasting numerous near-2-sigma excesses (which could be explained by vector-like quarks and chiral \(b'\) quarks, but are too small to deserve much space here) and a more intriguing 2.5-sigma excess in various final states with four top quarks.




This four-top excess is most clearly expressed in Figure 11.




This picture contains four graphs called (a),(b),(c),(d) – for ATLAS to celebrate the four letters of the alphabet ;-) – and they look as follows:



You may see that the solid black curve (which is sometimes the boundary of the red excluded region) sits strictly outside the yellow-green Brazil one-or-two-sigma band. The magnitude of the excess is about 2.5 sigma in all cases.

The excess is interpreted in four different ways. The graph (a) interprets the extra four-top events in terms of some contact interaction linking four tops at the same point. The horizontal axis shows the scale \(\Lambda\) of new physics from which this contact interaction arises. The vertical axis is the coefficient of the quartic interaction.



The graph (b) assumes that the four tops come from the decay of two pair-produced sgluons whose mass is on the horizontal axis. On the vertical axis, there is some cross section times the branching ratio to four tops.

And the remaining graphs (c) and (d) assume that the four tops arise from two universal extra dimensions (2UED) of the real projective plane (RPP) geometry. The Kaluza-Klein mass scale is on the horizontal axis. The vertical axis depicts the cross section times the branching ratio again. The subgraphs (c) and (d) differ by using the tier \((1,1)\) and \((2,0)+(0,2)\), respectively.
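
In all of these planes, the cross section times branching ratio on the vertical axis translates into an expected number of signal events through the usual N = sigma x BR x luminosity x efficiency bookkeeping. A schematic example with purely illustrative placeholder numbers (none of them taken from the ATLAS analysis):

```python
def expected_events(xsec_fb, br, lumi_fb, efficiency):
    """Expected signal yield: cross section (fb) x branching ratio x
    integrated luminosity (fb^-1) x selection efficiency."""
    return xsec_fb * br * lumi_fb * efficiency

n = expected_events(xsec_fb=20.0,    # hypothetical sgluon-pair cross section
                    br=1.0,          # assume decays giving four tops
                    lumi_fb=20.3,    # size of the 8 TeV run-1 dataset
                    efficiency=0.02) # hypothetical acceptance of the same-sign selection
print(f"expected signal events: {n:.1f}")
```

Limit curves like those in Figure 11 are just this relation run backwards: given the events seen and the expected background, one asks what cross section times branching ratio (and hence what mass) is excluded.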

Extra dimensions are cool but I still tend to bet that they will probably be too small and thus inaccessible to the LHC. Moreover, the RPP geometry is probably naive. But it's fun to see something that could be interpreted as positive evidence in favor of some extra dimensions.

I find the sgluons more realistic and truly exciting. They are colored scalar fields ("s" in "sgluon" stands for "scalar") in the adjoint representation of \(SU(3)_{QCD}\), much like gluons, and may be marketed as additional superpartners of gluinos under "another" supersymmetry in theories where the gauge bosons hide the extended, \(\NNN=2\) supersymmetry. Such models predict that the gluinos are Dirac particles, not just Majorana particles as they are in the normal \(\NNN=1\). This possibility has been discussed on this blog many times in recent years because I consider it elegant and clever – and naturally consistent with some aspects of the superstring model building.

Their graph (b) shows that sgluons may be as light as \(830\GeV\) or so.

Previously, CMS only saw a 1-sigma quadruple-top-quark "excess".

Finally, I also want to mention another preprint with light superpartners, ATLAS Z-peaked excess in MSSM with a light sbottom or stop, by Kobakhidze plus three pals which offers a possible explanation for the recent ATLAS Z-peaked 3-sigma excess. They envision something like an \(800\GeV\) gluino and a \(660\GeV\) sbottom.

by Luboš Motl (noreply@blogger.com) at April 20, 2015 06:31 AM

April 19, 2015

Tommaso Dorigo - Scientificblogging

The Era Of The Atom
"The era of the atom" is a new book by Piero Martin and Alessandra Viola - for now the book is only printed in Italian (by Il Mulino), but I hope it will soon be translated in English.

read more

by Tommaso Dorigo at April 19, 2015 11:07 AM

April 18, 2015

ZapperZ - Physics and Physicists

Complex Dark Matter
Don Lincoln has another video on Dark Matter, for those of you who can't get enough of these things.



Zz.

by ZapperZ (noreply@blogger.com) at April 18, 2015 01:13 PM

Clifford V. Johnson - Asymptotia

Festival Panel
Don't forget that this weekend is the fantastic LA Times Festival of Books! See my earlier post. Actually, I'll be on a panel at 3:00pm in Wallis Annenberg Hall entitled "Grasping the Ineffable: On Science and Health", with Pat Levitt and Elyn Saks, chaired by the science writer KC Cole. I've no idea where the conversation is going to go, but I hope it'll be fun and interesting! (See the whole schedule here.) Maybe see you there! -cvj Click to continue reading this post

by Clifford at April 18, 2015 06:07 AM

April 17, 2015

Quantum Diaries

In Defense of Scientism and the Joys of Self-Publishing.

As long-time readers of Quantum Diaries know I have been publishing here for a number of years and this is my 85th and last post[1]. A couple of years ago, I collected the then current collection, titled it “In Defense of Scientism,” after the title of one of the essays, and sent it off to a commercial publisher. Six months later, I got an e-mail from the editor complaining that he had lost the file and only found it by accident, and he somehow inferred that it was my fault. After that experience, it was no surprise he did not publish it.

With all the talk of self-publishing these days, I thought I would give it a try. It is easy, at least compared to finding the Higgs boson! There are a variety of options that give different levels of control, so one can pick and choose preferences – like off an à la carte menu. The simplest form of self-publishing is to go to a large commercial publisher. The one I found would, for $50.00 USD up front and $12.00 a year, supply print on demand and e-books to a number of suppliers. Not sure that I could recover the costs from the revenue – and being a cheapskate – I decided not to go that route. There are also commissioned alternatives with no upfront costs, but I decided to interact directly with three (maybe four, if I can jump over the humps the fourth has put up) companies. One of the companies treated their print-on-demand and digital distribution arms as distinct, even to the point of requiring different reimbursement methods. That is the disadvantage of doing it yourself, sorting it all out. The advantage of working directly with the suppliers is more control over the detailed formatting and distribution.

From then on things got fiddly[2], for example, reimbursement. Some companies would only allow payment by electronic fund transfer, others only by check. The weirdest example was one company that did electronic fund transfers unless the book was sold in Brazil or Mexico. In those cases, it is by check but only after $100.00 has been accumulated. One company verified, during account setup, that the fund transfer worked by transferring a small amount, in my case 16 cents. And then of course there are special rules if you earn any money in the USA. For USA earnings there is a 30% withholding tax unless you can document that there is a tax treaty that allows you to get around it. The USA is the only country that requires this. Fine, being an academic, I am used to jumping through hoops.

Next was the question of an International Standard Book Number (ISBN). They are not required but are recommended. That is fine since in Canada you can get them for free. Just as well since each version of the book needs a different number. The paperback needs a different number from the electronic and each different electronic format requires its own number. As I said, it is a good thing it is free. Along with the ISBN, I got a reminder that the Library of Canada requires one copy of each book that sells more than four copies and two copies if it goes over a hundred and of course a separate electronic copy if you publish electronically. Fun, fun, fun[3]. There are other implications of getting your own ISBN number. Some of the publishers would supply an ISBN free of charge but then would put the book out under their own imprint and, in some cases, give wider distribution to those books. But again, getting your own number ultimately gives you more control.

With all this research in hand, it was time to create and format the content. I had the content from the four years’ worth of Quantum Diary posts and all I had to do was put it together and edit for consistency. Actually, Microsoft Word worked quite well with various formatting features to help. I then gave it to my wife to proofread. That was a mistake; she is still laughing at some of the typos. At least there is now an order of magnitude fewer errors. I should also acknowledge the many editorial comments from successive members of the TRIUMF communications team.

The next step was to design the book cover. There comes a point in every researcher’s career when they need support and talent outside of themselves. Originally, I had wanted to superimpose a picture of a model boat on a blackboard of equations. With that vision in mind, I set about the hallways to seek and enroll the talent of a few staff members who could make it happen. After normal working hours, of course. A co-op communication student suggested that the boat be drawn on the blackboard rather than a picture superimposed. The equations were already on a blackboard and are legitimate. The boat was hand drawn by a talented lady in accounting, drawing it first onto an overhead transparency[4] and then projecting it onto a blackboard. A co-op student in the communications team produced the final cover layout according to the various colour codes and margin bleeds dictated by each publisher. For both my own and your sanity, I won’t go into all the details. In the end, I rather like how the cover turned out.

For print-on-demand, they wanted a separate pdf for the cover and for the interior. They sent very detailed instructions so that was no problem. It only took about three tries to get it correct. The electronic version was much more problematic. I wonder if the companies that produce both paper and digital get it right. I suspect not. There is a free version of a program that converts from Word to epub format but the results have some rather subtle errors, like messing up the table of contents. I ended up using one of the digital publisher’s conversion services provided as a free service. If you buy a copy and it looks messed up, I do not want to hear about it.[5] One company (the fourth mentioned above) added a novel twist. I jumped all the hoops related to banking information for wire transfers, did the USA tax stuff and then went to upload the content. Ah, I needed to download a program to upload the content. That should not have been a problem but it ONLY runs on their hardware. The last few times I used their hardware it died prematurely so they can stuff it.

Now, several months after I started the publishing process, I have jumped through all the hoops! All I have to do is lie back and let the money roll in so I can take early retirement. Well, at my age, early retirement is no longer a priori possible but at least I hope to get enough money to buy the people who helped me prepare the book a coffee. So everyone, please rush out and buy a copy. Come on, at least one of you.

As a final point, you may wonder why there is a drawing of a boat on the cover of a book about the scientific method. Sorry, to find out you will have to read the book. But I will give you a hint. It is not that I like to go sailing. I get seasick.

To receive a notice of my blogs and other writing follow me on Twitter: @musquod.

[1] I know, I have promised this before, but this time trust me. I am not like Lucy in the Charlie Brown cartoons pulling the football away.

[2] Epicurus, who made the lack of hassle the greatest good, would not have approved.

[3] Reminds me of an old Beach Boys song.

[4] An old overhead projector was found in a closet.

[5] Hey! We got through an entire conversation about formatting and word processing software without mentioning LaTeX despite me having been the TRIUMF LaTeX guru before I went over to the dark side and administration.

by Byron at April 17, 2015 09:30 PM

Jester - Resonaances

Antiprotons from AMS
This week the AMS collaboration released the long expected measurement of the cosmic ray antiproton spectrum.  Antiprotons are produced in our galaxy in collisions of high-energy cosmic rays with interstellar matter, the so-called secondary production.  Annihilation of dark matter could add more antiprotons on top of that background, which would modify the shape of the spectrum with respect to the prediction from the secondary production. Unlike for cosmic ray positrons, in this case there should be no significant primary production in astrophysical sources such as pulsars or supernovae. Thanks to this, antiprotons could in principle be a smoking gun of dark matter annihilation, or at least a powerful tool to constrain models of WIMP dark matter.

The new data from the AMS-02 detector extend the previous measurements from PAMELA up to 450 GeV and significantly reduce experimental errors at high energies. Now, if you look at the promotional material, you may get an impression that a clear signal of dark matter has been observed. However, experts unanimously agree that the brown smudge in the plot above is just shit, rather than a range of predictions from the secondary production. At this point, there are certainly no serious hints of a dark matter contribution to the antiproton flux. A quantitative analysis of this issue appeared in a paper today. Predicting the antiproton spectrum is subject to large experimental uncertainties about the flux of cosmic ray protons and about the nuclear cross sections, as well as theoretical uncertainties inherent in models of cosmic ray propagation. The data and the predictions are compared in this Jamaican band plot. Apparently, the new AMS-02 data are situated near the upper end of the predicted range.

Thus, there is currently no hint of dark matter detection. However, the new data are extremely useful to constrain models of dark matter. New constraints on the annihilation cross section of dark matter are shown in the plot to the right. The most stringent limits apply to annihilation into b-quarks or into W bosons, which yield many antiprotons after decay and hadronization. The thermal production cross section - theoretically preferred in a large class of WIMP dark matter models - is, in the case of b-quarks, excluded for dark matter particle masses below 150 GeV. These results provide further constraints on models addressing the hooperon excess in the gamma ray emission from the galactic center.
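
For reference, the "thermal production cross section" is the annihilation rate that makes the freeze-out relic abundance come out right. A standard rule of thumb (assumed here, not derived in the post) is Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>:

```python
# Rule-of-thumb relic abundance relation: Omega*h^2 ~ 3e-27 cm^3/s / <sigma v>.
OMEGA_H2_OBSERVED = 0.12   # observed dark matter density parameter
RELIC_CONSTANT = 3e-27     # cm^3/s, rule-of-thumb normalization (assumed)

sigma_v_thermal = RELIC_CONSTANT / OMEGA_H2_OBSERVED
print(f"thermal cross section ~ {sigma_v_thermal:.1e} cm^3/s")  # ~2.5e-26, i.e. the canonical ~3e-26
```

Excluding that benchmark value below 150 GeV in the b-quark channel is what gives the new limits their bite.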

More experimental input will allow us to tune the models of cosmic ray propagation to better predict the background. That, in turn, should lead to  more stringent limits on dark matter. Who knows... maybe a hint for dark matter annihilation will emerge one day from this data; although, given the uncertainties,  it's unlikely to ever be a smoking gun.

Thanks to Marco for comments and plots. 

by Jester (noreply@blogger.com) at April 17, 2015 05:10 PM

astrobites - astro-ph reader's digest

The Milky Way’s Alien Disk and Quiet Past

Title: The Gaia-ESO Survey: A Quiescent Milky Way with no Significant Dark/Stellar Accreted Disk
Authors: G. R. Ruchti, J. I. Read, S. Feltzing, A. M. Serenelli, P. McMillan, K. Lind, T. Bensby, M. Bergemann, M. Asplund, A. Vallenari, E. Flaccomio, E. Pancino, A. J. Korn, A. Recio-Blanco, A. Bayo, G. Carraro, M. T. Costado, F. Damiani, U. Heiter, A. Hourihane, P. Jofre, G. Kordopatis, C. Lardo, P. de Laverny, L. Monaco, L. Morbidelli, L. Sbordone, C. C. Worley, S. Zaggia
First Author’s Institution: Lund Observatory, Department of Astronomy and Theoretical Physics, Lund, Sweden
Status: Accepted for publication in MNRAS

 

 

Galaxy-galaxy collisions can be quite spectacular. The most spectacular occur among galaxies of similar mass, where each galaxy’s competing gravitational forces and comparable reserves of star-forming gas are strong and vast enough to contort the other into bright rings, triply-armed leviathans, long-tailed mice, and cosmic tadpoles. Such collisions, as well as their tamer counterparts between galaxies with large differences in mass—perhaps better described as an accretion event rather than a collision—comprise the inescapable growing pains for adolescent galaxies destined to become the large galaxies adored by generations of space enthusiasts, a privileged group of galaxies to which our home galaxy, the Milky Way, belongs.

What’s happened to the hapless galaxies thus consumed by the Milky Way? The less massive among these unfortunate interlopers take a while to fall irreversibly deep into the Milky Way’s gravitational clasp, and thus dally, largely unscathed, in the Milky Way’s stellar halo during their long but inevitable journey in. More massive galaxies feel the gravitational tug of the Milky Way more strongly, shortening the time it takes them to orbit and eventually merge with the Milky Way, as well as making them more vulnerable to being gravitationally ripped apart. But this is not the only gruesome process the interlopers undergo as they speed towards their deaths. Galaxies whose orbits cause them to approach the dense disk of the Milky Way are forced to plow through the increasing amounts of gas, dust, stars, and dark matter they encounter. The disk produces a drag-like force that slows the galaxy down, and the more massive and/or dense the galaxy, the more it’s slowed as it passes through. Not only that, the disk gradually strips the unfortunate galaxy of the microcosm of stars, gas, and dark matter it nurtured within. The most massive galaxies accreted by the Milky Way (those at least a tenth of the mass of the Milky Way, the instigators of major mergers) are therefore dragged towards the disk and are forced to deposit their stars, gas, and dark matter preferentially in the disk every time their orbits bring them through the disk. The stars deposited in the disk in such a manner are called “accreted disk stars,” and the dark matter deposited forms a “dark disk.”

The assimilated stars are thought to compose only a small fraction of the stars in the Milky Way disk. However, they carry the distinct markings of the foreign worlds in which they originated. The accreted galaxies, lower in mass than the Milky Way, are typically less efficient at forming stars, and thus contain fewer metals and alpha elements produced by supernovae, the winds of some old red stars, and other enrichment processes instigated by stars. Some stars born in the Milky Way, however, are also low in metals and alpha elements (either holdovers formed in the early, less metal- and alpha-element-rich days of the Milky Way’s adolescence, or stars formed in regions where gas was not readily available to form stars). There is one key difference between native and alien stars that provides the final means to identify which of the low-metallicity, low-alpha stars were accreted: stars native to the Milky Way typically form in the disk and thus have nearly circular orbits that lie within the disk, while the orbits of accreted stars are more randomly oriented and/or more elliptical (see Figure 1). Thus, armed with the metallicity, alpha abundance, and kinematics of a sample of stars in the Milky Way, one could potentially pick out the stars among us that have likely fallen from a foreign world.

 

A search for the accreted disk allows us to peer into the Milky Way’s past and provides clues as to the existence of a dark disk—a quest the authors of today’s paper set out on. Their forensic tool of choice? The Gaia-ESO survey, an ambitious ground-based spectroscopic survey designed to complement Gaia, a space-based mission that will measure the positions and motions of an astounding 1 billion stars with high precision, from which a 3D map of our galaxy can be constructed and our galaxy’s history untangled. The authors derived metallicities, alpha abundances, and kinematics for about 7,700 stars from the survey. Previous work by the authors informed them that the most promising accreted disk candidates would have metallicities no more than about 60% that of the Sun, an alpha abundance less than double that of the Sun, and orbits that are sufficiently non-circular and/or out of the plane of the disk. The authors found about 4,700 such stars, confirming the existence of an accreted stellar disk in the Milky Way.
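
To make the selection concrete, here is a minimal sketch (my own illustration, not the authors' pipeline) of what such cuts might look like in Python. The column names, the circularity threshold of 0.8, and the toy random catalogue are all assumptions; the metallicity and alpha cuts paraphrase the criteria above.

```python
# Minimal sketch of an accreted-disk-star selection, loosely following the
# criteria described above; column names and thresholds are illustrative.
import numpy as np

def select_accreted_candidates(feh, alpha_fe, jz_over_jc):
    """Return a boolean mask of accreted-disk candidates.

    feh        : [Fe/H], log10 of iron abundance relative to the Sun
    alpha_fe   : [alpha/Fe], log10 of alpha enhancement relative to the Sun
    jz_over_jc : orbital circularity (1 = circular, in-plane disk orbit)
    """
    metal_poor     = feh < np.log10(0.6)      # below ~60% of solar metallicity
    alpha_normal   = alpha_fe < np.log10(2.0) # alpha abundance less than double solar
    non_disk_orbit = jz_over_jc < 0.8         # illustrative circularity cut
    return metal_poor & alpha_normal & non_disk_orbit

# toy usage with random numbers standing in for the ~7,700 survey stars
rng = np.random.default_rng(0)
feh = rng.normal(-0.3, 0.5, 7700)
alpha_fe = rng.normal(0.1, 0.1, 7700)
circ = rng.uniform(0.0, 1.0, 7700)
mask = select_accreted_candidates(feh, alpha_fe, circ)
print(f"{mask.sum()} candidate accreted stars out of {len(mask)}")
```

In the real analysis the kinematic cut is of course derived from measured positions and velocities plus a model of the Galactic potential, not drawn at random.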

Were any of these stars deposited in spectacular mergers with high-mass galaxies? It turns out that one can predict the mass of a dwarf galaxy from its average metallicity. The authors estimated two bounds on the masses of the accreted galaxies: one by assuming that all the stars matching their accreted disk star criteria were bona fide accreted stars, and the other by throwing out stars that might belong to the disk—those with metallicities greater than 15% of the Sun’s. The average metallicity of the first subset of accreted stars was about 10 times less than the Sun’s, implying that they came from galaxies with a stellar mass of 10^8.2 solar masses. Throwing out possible disk stars lowered the average metallicity to about 5% of the Sun’s, implying that they originated in galaxies with a stellar mass of 10^7.4 solar masses. In comparison, the Milky Way’s stellar halo is about 10^10 solar masses. Thus it appears that the Milky Way has, unusually, suffered no recent major mergers, at least since it formed its disk about 9 billion years ago. This agrees with many studies that have used alternative methods to probe the formation/accretion history of the Milky Way.
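
For a back-of-the-envelope feel for how a mean metallicity translates into a progenitor mass, here is a sketch using one widely quoted dwarf-galaxy mass–metallicity calibration (approximately [Fe/H] = -1.69 + 0.30 log10(M*/10^6 Msun), from Kirby et al. 2013); treating this as the relation the authors used is my assumption, made only for illustration.

```python
# Sketch: invert a dwarf-galaxy mass-metallicity relation to turn a mean
# [Fe/H] into a rough progenitor stellar mass. The coefficients are the
# approximate Kirby et al. (2013) calibration; whether today's authors used
# exactly this relation is an assumption.
from math import log10

def stellar_mass_from_feh(mean_feh, a=-1.69, b=0.30):
    """Invert [Fe/H] = a + b*log10(M*/1e6 Msun); returns M* in solar masses."""
    return 1e6 * 10 ** ((mean_feh - a) / b)

for feh in (-1.0, -1.3):   # roughly 10% and 5% of the solar metallicity
    mass = stellar_mass_from_feh(feh)
    print(f"<[Fe/H]> = {feh:+.1f}  ->  M* ~ 10^{log10(mass):.1f} solar masses")
```

With those coefficients the sketch lands within about 0.1 dex of the masses quoted above.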

The lack of major mergers also implies that the Milky Way likely does not have a disk of dark matter.  This is an important finding for those searching for dark matter signals in the Milky Way, and one which implies that the Milky Way’s dark matter halo is oblate (flattened at the poles) if there is more dark matter than we’ve estimated based on simplistic models that assumed the halos to be perfectly spherical.

 


Figure 1. Evidence of a foreign population of stars. The Milky Way’s major mergers (in which the Milky Way accretes a smaller galaxy with mass greater than a tenth of the Milky Way’s) can deposit stars in our galaxy’s disk. These plots demonstrate one method to determine which stars may have originated in such a merger: how far a star’s orbit is from an in-plane circular orbit, as described by the Jz/Jc parameter. Stars born in the disk (or “in-situ”) typically have circular orbits that lie in the disk plane—these have Jz/Jc close to one, whereas those that were accreted have lower Jz/Jc. The plots above were computed for a major merger like that between the Milky Way and its dwarf companion the Large Magellanic Cloud, which has about a tenth the mass of the Milky Way. If the dwarf galaxy initially has a highly inclined orbit (from left to right, 20, 40, and 60 degree inclinations), then the Jz/Jc distribution of stars deposited in the disk by the galaxy becomes increasingly distinct.
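
For readers wondering what Jz/Jc actually is: Jz is the component of a star's angular momentum perpendicular to the disk plane, and Jc is the angular momentum of a circular, in-plane orbit with the same energy. Below is a toy Python calculation of it, assuming (purely for illustration) a flat-rotation-curve logarithmic potential with a Milky-Way-like circular speed; this is not the potential or the code used in the paper.

```python
# Toy calculation of the orbital circularity Jz/Jc, assuming a
# flat-rotation-curve (logarithmic) galaxy potential with circular speed VC.
import numpy as np

VC = 220.0  # km/s, roughly the Milky Way's circular speed (assumed constant)

def potential(R):
    """Logarithmic potential Phi(R) = VC^2 ln(R/R0); R in kpc, Phi in (km/s)^2."""
    return VC**2 * np.log(R / 8.0)   # zero point at R0 = 8 kpc (arbitrary choice)

def jz_over_jc(x, y, z, vx, vy, vz):
    """Jz divided by the angular momentum of a circular orbit of the same
    energy, for the logarithmic potential above."""
    R = np.sqrt(x**2 + y**2 + z**2)
    E = 0.5 * (vx**2 + vy**2 + vz**2) + potential(R)
    jz = x * vy - y * vx                      # kpc * km/s
    # For Phi = VC^2 ln(R/R0), a circular orbit has E = Phi(Rc) + VC^2/2,
    # so Rc = R0 * exp((E - VC^2/2)/VC^2) and Jc = Rc * VC.
    rc = 8.0 * np.exp((E - 0.5 * VC**2) / VC**2)
    return jz / (rc * VC)

# A Sun-like disk star: nearly circular, in-plane orbit -> Jz/Jc close to 1
print(jz_over_jc(8.0, 0.0, 0.0, 0.0, 220.0, 10.0))
# An inclined, kinematically hot orbit, as expected for an accreted star -> lower Jz/Jc
print(jz_over_jc(8.0, 0.0, 3.0, 80.0, 120.0, 90.0))
```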

 

Cover image: The Milky Way, LMC, SMC from Cerro Paranal in the Atacama Desert, Chile. [ESO / Y. Beletsky]

 

by Stacy Kim at April 17, 2015 04:18 PM

Clifford V. Johnson - Asymptotia

Southern California Strings Seminar
2011 SCSS held in Doheny Library, USC

There's an SCSS today, at USC! (Should have mentioned it earlier, but I've been snowed under... I hope that the appropriate research groups have been contacted and so forth.) The schedule can be found here along with maps. -cvj Click to continue reading this post

by Clifford at April 17, 2015 03:22 PM

Quantum Diaries

Life Underground: Anything Anyone Would Teach Me

Going underground most days for work is probably the weirdest-sounding thing about this job. At Laboratori Nazionali del Gran Sasso, we use an underground lab because of the protection it affords us from cosmic rays, weather, and other disruptions, and with it we get a shorthand for all the weirdness of lab life. It’s all just “underground.”


The last kilometer of road before reaching the above-ground labs of LNGS

Some labs for low-background physics are in mines, like SURF, where fellow Quantum Diariest Sally Shaw works. One of the great things about LNGS is that we’re located off a highway tunnel, so it’s relatively easy to reach the lab: we just drive in. There’s a regular shuttle schedule every day, even on weekends. When there are snowstorms that close parts of the highway, the shuttle still goes; it just takes a longer route all the way to the next easy exit. The ride is a particularly good time to start drafting blog posts. On days when the shuttle schedule is inconvenient or our work is unpredictable, we can drive individual cars, provided they’ve passed emissions standards.

The guards underground keep a running list of all the people underground at any time, just like in a mine. So, each time I enter or leave, I give my name to the guards. This leads to some fun interactions where Italian speakers try to pronounce names from all over. I didn’t think too much of it before I got here, but in retrospect I had expected that any name of European etymology would be easy, and others somewhat more difficult. In fact, the difficult names are those that don’t end in vowels: “GladStone” becomes “Glad-eh-Stone-eh”. But longer vowel-filled names are fine, and easy to pronounce, even though they’re sometimes just waved off as “the long one” with a gesture.

There’s constantly water dripping in the tunnel. Every experiment has to be housed in something waterproof, and gutters line all the hallways, usually with algae growing in them. The walls are coated with waterproofing, more to keep any potential chemical spill from us from getting into the local groundwater than to keep the water off our experiments. When we walk from the tunnel entrance to the experimental halls, the cue for me to don a hardhat is the first drip on my head from the ceiling. Somehow, it’s always right next to the shuttle stop, no matter where the shuttle parks.

And, because this is Italy, the side room for emergencies has a bathroom and a coffee machine. There’s probably emergency air tanks too, but the important thing is the coffee machine, to stave off epic caffeine withdrawal headaches. And of course, “coffee” means “espresso” unless otherwise stated– but that’s another whole post right there.

When I meet people in the neighboring villages, at the gym or buying groceries or whatever, they always ask what an “American girl” is doing so far away from the cities, and “lavoro a Laboratorio Gran Sasso” is immediately understood. The lab is even the economic engine that’s kept the nearest village alive: it has restaurants, hotels, and rental apartments all catering to people from the lab (and the local ski lift), but no grocery stores, ATMs, gyms, or post offices that would make life more convenient for long-term residents.

Every once in a while, when someone mentions going underground, I can’t help thinking back to the song “Underground” from the movie Labyrinth that I saw too many times growing up. Labyrinth and The Princess Bride were the “Frozen” of my childhood (despite not passing the Bechdel test).

Just like Sarah, my adventures underground are alternately shocking and exactly what I expected from the stories, and filled with logic puzzles and funny characters. Even my first night here, when I was delirious with jetlag, I saw a black cat scamper across a deserted medieval street, and heard the clock tower strike 13 times. And just like Westley, “it was a fine time for me, I was learning to fence, to fight–anything anyone would teach me–” (except that in my case it’s more soldering, cryogenics plumbing, and ping-pong, and less fighting). The day hasn’t arrived when the Dread Pirate Roberts calls me to his office and gives me a professorship.

And now the shuttle has arrived back to the office, so we’re done. Ciao, a dopo.

(ps the clock striking 13 times was because it has separate tones for the hour and the 15-minute chunks. The 13 was really 11+2 for 11:30.)

by Laura Gladstone at April 17, 2015 05:00 AM

April 16, 2015

Quantum Diaries

Building a Neutrino Detector

Ever wanted to see all the steps necessary for building a neutrino detector? Well, now you can: check out this awesome video of the construction of the near detector for the Double Chooz reactor neutrino experiment in France.

This is the second of two identical detectors near the Chooz nuclear power station in northern France. The experiment, along with competing experiments, already showed that the neutrino mixing angle, Theta_13, was non-zero. A second detector measuring the same flux of neutrinos from the two reactor cores will drastically reduce the final measurement uncertainty.
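
A toy calculation shows why the near detector helps so much: to a good approximation the near site sees the unoscillated flux, so the ratio of far to near rates cancels the poorly known reactor flux normalization. The baselines, energy and oscillation parameters below are rough, assumed values, not official Double Chooz numbers.

```python
# Toy two-detector illustration of why a near detector shrinks the error:
# the near site sees an (almost) unoscillated flux, so the far/near ratio
# removes the reactor flux normalization. All inputs are rough assumptions.
from math import sin

SIN2_2THETA13 = 0.09      # assumed value of sin^2(2*Theta_13)
DM2_EV2       = 2.5e-3    # |Delta m^2| in eV^2, assumed

def survival_prob(L_m, E_MeV):
    """Approximate reactor antineutrino survival probability."""
    return 1.0 - SIN2_2THETA13 * sin(1.267 * DM2_EV2 * L_m / E_MeV) ** 2

E = 4.0                    # MeV, typical detected antineutrino energy
near, far = 400.0, 1050.0  # metres, approximate Double Chooz baselines

p_near, p_far = survival_prob(near, E), survival_prob(far, E)
print(f"near detector deficit : {1 - p_near:.1%}")
print(f"far detector deficit  : {1 - p_far:.1%}")
print(f"far/near ratio        : {p_far / p_near:.3f}  (flux normalization cancels)")
```

The few-percent deficit at the far site is what carries the Theta_13 information, while the common flux uncertainty drops out of the ratio.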

by jfelde at April 16, 2015 02:04 PM

Symmetrybreaking - Fermilab/SLAC

Seeing the CMS experiment with new eyes

The wonders of particle physics serve as a springboard for a community-building arts initiative at Fermilab.

For many, the aspects of research at the Large Hadron Collider that inspire wonder are the very same that cast it as intellectually remote: ambitious aims about understanding our universe, a giant circular machine in the European underground, mammoth detectors that tower over us like cathedrals.

The power of art lies in the way it bridges the gap between wonder and understanding, says particle physicist and artist Michael Hoch, founder and driving force behind the outreach initiative Art@CMS. Through the creation and consumption of art inspired by the CMS experiment at the LHC, the public and scientific community approach each other in novel ways, allowing one party to better relate to the other and demystifying the science in the process.

“Art can transport information, but it has an additional layer—a way of allowing human beings to get in touch with each other,” says Hoch, who has worked as a scientist on CMS since 2007. “It can reach people who might not be interested in a typical science presentation. They might not feel smart enough, they might be afraid to be wrong. But with art, you cannot be wrong. It’s a personal reflection.”

As the hub for the United States’ participation in the CMS experiment, Fermilab, located outside Chicago, is currently showing the Art@CMS exhibit in the Fermilab Art Gallery. Organized by Fermilab Art Gallery curator Georgia Schwender, the exhibit coincides with the restart of the LHC, which recently fired up again after a two-year break for upgrades. The exhibit is not only a celebration of the LHC restart, it also aims to create connections between artists and CMS physicists in the United States.

Each artist in the Fermilab exhibit collaborated with a CMS scientist in researching his or her work. Emphasizing the collaborative nature of the exhibit, the artwork title cards display both the name of the artist and the collaborating scientist. Drawing on their interactions, the artists created pieces that invite the viewer to see the experiment—the science, the instruments and the people behind it—with new eyes.

Likewise, scientists get a chance to see how others view their search through unfathomably tiny shards of matter to solve the universe’s mysteries.  

“We work with creative people who come up with creative products that as scientists we may never have thought of, expressing our topic in a new way,” Hoch says. “If we can work with people to create different viewpoints on our topic, then we gain a lot.”

That spirit extends to young art students in Fermilab’s backyard. During one intense day at Fermilab in February, Hoch interacted with students from four local high schools. As part of this student outreach effort, called Science&Art@School, the students also learned from Fermilab scientists about what it’s like to work in the world of particle physics and about their own paintings and photographs. And with artist participants in the exhibit, students discussed translating hard-to-picture phenomena into something tangible.

The students were then given an assignment: Create a piece of art based on what they learned about Fermilab and CMS.

Through Science&Art@School, Fermilab caught hold of the imaginations of students who don’t typically visit the laboratory: non-science students, says Fermilab docent Anne Mary Teichert, who organized the effort.

“It was an amazing opportunity. Students were able to push themselves in ways they hadn’t before due to CMS’s generous contribution of funds for art supplies,” she says. “It was intense from the word ‘go.’”

The students’ work sessions resulted in a display of artwork at Water Street Studios in the nearby town of Batavia, Illinois.

“Their artwork reveals not just an abstract understanding—there’s a human dimension,” Teichert says. “They portray how physics resonates with their lives. There’s warmth and thoughtfulness there, and the connections they made were very interesting.”

Student Brandon Shimkus created a cube-shaped sculpture in which each side represents a different area of particle physics, from those we understand well to those for which we have some information but don’t fully grasp, such as dark matter.

“We were given so much information about things we never think about in that kind of way—and then we had to get our information together and make or paint something,” Shimkus says. “It was a challenge to create these things based on ideas you could barely understand on your own—but a fun challenge. If I could do it again, I would, hundreds of times.”

Science&Art@School has hosted 10 workshops, and the Art@CMS exhibit has shown at 25 venues around the world. The programs benefit from the fact that the CMS detector itself—a four-story-high device of intricate symmetry—is as visually fetching as it is a technological masterpiece. A life-size picture of the instrument by Hoch and CERN photographer Maximilien Brice is the centerpiece of Fermilab’s Art@CMS exhibit.

“Enlarging our CMS collaboration with art institutions and engaging artists, we gain a few points for free,” Hoch says. “And we want to fascinate the students with what we’re doing because this concerns them. They can open their eyes and see what we’re doing here is not just something far away—it’s taking place here in their neighborhood. And they are the next generation of us.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Leah Hesla at April 16, 2015 01:00 PM

Matt Strassler - Of Particular Significance

Science Festival About to Start in Cambridge, MA

It’s a busy time here in Cambridge, Massachusetts, as the US’s oldest urban Science Festival opens tomorrow for its 2015 edition.  It has been 100 years since Einstein wrote his equations for gravity, known as his Theory of General Relativity, and so this year a significant part of the festival involves Celebrating Einstein.  The festival kicks off tomorrow with a panel discussion of Einstein and his legacy near Harvard University — and I hope some of you can go!   Here are more details:

—————————-

First Parish in Cambridge, 1446 Massachusetts Avenue, Harvard Square, Cambridge
Friday, April 17; 7:30pm-9:30pm

Officially kicking off the Cambridge Science Festival, four influential physicists will sit down to discuss how Einstein’s work shaped the world we live in today and where his influence will continue to push the frontiers of science in the future!

Our esteemed panelists include:
Lisa Randall | Professor of Physics, Harvard University
Priyamvada Natarajan | Professor of Astronomy & Physics, Yale University
Clifford Will | Professor of Physics, University of Florida
Peter Galison | Professor of History of Science, Harvard University
David Kaiser | Professor of the History of Science, MIT

Cost: $10 per person, $5 per student, Tickets available now at https://speakingofeinstein.eventbrite.com


Filed under: History of Science, Public Outreach Tagged: Einstein, PublicOutreach, relativity

by Matt Strassler at April 16, 2015 12:48 PM

Lubos Motl - string vacua and pheno

LHC: chance to find SUSY quickly
This linker-not-thinker blog post will largely show materials of ATLAS. To be balanced, let me begin with a recommendation for a UCSB article, Once More Unto the Breach, about the CMS' excitement before the 13 TeV run. Note that the CMS (former?) boss Incandela is from UCSB. They consider the top squark to be their main target.

ATLAS is more into gluinos and sbottoms, it may seem. On March 25th, ATLAS released interesting graphs
Expected sensitivity studies for gluino and squark searches using the early LHC 13 TeV Run-2 dataset with the ATLAS experiment (see also PDF paper)
There are various graphs but let's repost six graphs using the same template.




These six graphs show the expected \(p_0\) (the probability of a false positive under the background-only hypothesis; see the left vertical axis), or equivalently the number of sigmas (see the dashed red lines with explanations on the right vertical axis), with which a new superpartner would be discovered after 1, 2, 5, and 10 inverse femtobarns of collisions.
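
To get an intuition for why the assumed background uncertainty matters so much in these projections, one may use the crude counting-experiment estimate \(Z \approx s/\sqrt{b + (\delta b)^2}\), where \(s,b\) are the expected signal and background counts and \(\delta\) is the relative systematic uncertainty of the background. The effective cross sections in the toy script below are placeholders of mine, not ATLAS numbers.

```python
# Back-of-the-envelope significance for a counting experiment, illustrating
# why reducing the relative background uncertainty (40% -> 20%) helps.
# Cross sections and efficiencies below are placeholders, not ATLAS numbers.
from math import sqrt

def significance(lumi_fb, sig_xs_fb, bkg_xs_fb, delta_b):
    """Approximate Z = s / sqrt(b + (delta_b * b)^2) for integrated
    luminosity lumi_fb (in 1/fb) and effective cross sections in fb."""
    s = lumi_fb * sig_xs_fb
    b = lumi_fb * bkg_xs_fb
    return s / sqrt(b + (delta_b * b) ** 2)

for delta_b in (0.4, 0.2):
    for lumi in (1, 2, 5, 10):
        z = significance(lumi, sig_xs_fb=4.0, bkg_xs_fb=3.0, delta_b=delta_b)
        print(f"delta_b={delta_b:.0%}, {lumi:>2}/fb -> Z ~ {z:.1f} sigma")
```

At high luminosity the systematic term dominates and \(Z\) saturates near \(s/(\delta b)\), which is why halving the background uncertainty buys so much.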




First, the bottom squark production. The sbottom decays to the neutralino and the bottom quark. If the uncertainty of the Standard Model backgrounds is at 40 percent, the graph looks like this:



You see that if the sbottom is at least 700 GeV heavy, even 10/fb will only get you to 3 sigma. Things improve if the uncertainty in the Standard Model backgrounds is only 20%. Then you get to 4.5 sigma:



Now, the production of gluino pairs. Each gluino decays to a neutralino and two quarks. With the background uncertainty 40%, we get this:



With the background uncertainty 20%, things improve:



You see that even a 1350 GeV gluino may be discovered at 5 sigma after 10 inverse femtobarns. I do think that I should win a bet against Adam Falkowski after 10/fb of new data because only 20/fb of the old data has been used in the searches and the "total deadline" of the bet is 30/fb.

Things look similar if there is an extra W-boson among the decay products of each gluino. With the 50% uncertainty of the Standard Model backgrounds, the chances are captured by this graph:



If the uncertainty of the Standard Model backgrounds is reduced to 25%, the discovery could be faster:



If you're happy with 3-sigma hints, they may appear after 10/fb even if the gluino is slightly above 1500 GeV.

The probability is small but nonzero that the gluino or especially the sbottom may be discovered even with 5/fb (if not 2/fb and perhaps 1/fb) of the data.

After 300/fb of collisions, one may see a wider and safer region of the parameter space, see e.g. this CMS study.

by Luboš Motl (noreply@blogger.com) at April 16, 2015 09:17 AM

ATLAS Experiment

From ATLAS around the world: A view from Down Under

While ATLAS members at CERN were preparing for Run 2 during ATLAS week, and eagerly awaiting beams circulating in the LHC once again, colleagues “down under” in Australia were having a meeting of their own. The ARC Centre of Excellence for Particle Physics at the Terascale (CoEPP) is the hub of all things ATLAS in Australia. Supported by a strong cohort of expert theorists, we represent almost the entirety of particle physics in the nation. It certainly felt that way at our meeting: more than 120 people participated over five days of presentations, discussions and workshops. The week commenced at Monash University, where our youngest researchers attended a one-and-a-half-day summer school. They then joined their lecturers on planes across the Bass Strait to Tasmania, where we held our annual CoEPP general meeting.

CoEPP comprises ATLAS collaborators from the University of Adelaide, University of Melbourne and University of Sydney, augmented by theory groups, and joined by theory colleagues from Monash University. CoEPP is enhanced further by international partnerships with investigators in Cambridge, Duke, Freiburg, Geneva, Milano and UPenn to help add a global feel to the strong national impact.

 


Larry Lee of the University of Adelaide talks about his ideas for ATLAS Run 2 physics analyses.

Ongoing work was presented on precision studies of the Higgs boson, with a primary focus on the process where the Higgs is produced in association with a top-antitop quark pair (ttH) in the multilepton final state and the process where the Higgs decays into two tau leptons (H->tautau). Published results were shared along with some thoughts on how these analyses may proceed looking forward to Run 2. Novel techniques to search for beyond Standard Model processes in Supersymmetry and Exotica were discussed along with analysis results from Run 1 and prospects for discovery for various new physics scenarios. CoEPP physicists are also involved in precision measurements of the top-antitop (ttbar) cross-section and studies of the production and decay of Quarkonia, “flavourless” mesons comprised of a quark and its own anti-quark (Charmonium for instance is made up of charm and anti-charm quarks). It wasn’t just ATLAS physics being discussed though, with time set aside to talk about growing involvement in the plans to upgrade ATLAS (including the trigger system and inner detector) and how we can best leverage national expertise to have a telling impact.

A dedicated talk to outline our national research computing support for ATLAS proved very helpful to many people new to the Australian ATLAS landscape.


CoEPP director, Professor Geoffrey Taylor of the University of Melbourne, in deep discussion during the poster session.

I was happy to spend time with colleagues from our collaborating institutes and also to meet the new cohort of students, postdocs and researchers who have joined us over the past year. It dawns on me how the Australian particle physics effort is growing, and how we are attracting some of the brightest minds to the country. It is exciting to see the expansion and to be able to play a part in growing an effort nationally. The breadth of Australia’s particle physics involvement was demonstrated with a discussion of national involvement in Belle-II and the exciting development of a potential direct dark matter experiment to be situated in Australia at the Stawell Underground Physics Laboratory. The talks rounded out a complete week of interesting physics, good food, a few drinks and a lot of laughs.

As this was the first visit to Hobart for many of us it was particularly pleasing that the meeting dinner was held at the iconic Museum of Old and New Art (MONA), just outside the centre of the city. It proved a fitting setting to frame the exciting discussion, new and innovative ideas, and mixture of reflection and progression that the week contained. Although Australia’s ATLAS members are some of the farthest from CERN there is considerable activity and excitement down under as we plan to partake in a journey of rediscovery of the Standard Model at a new energy, and to see what else nature may have in store for us.


All the CoEPP workshop attendees outside MONA, Hobart.

 

by Paul Jackson at April 16, 2015 04:36 AM

John Baez - Azimuth

Kinetic Networks: From Topology to Design

Here’s an interesting conference for those of you who like networks and biology:

Kinetic networks: from topology to design, Santa Fe Institute, 17–19 September, 2015. Organized by Yoav Kallus, Pablo Damasceno, and Sidney Redner.

Proteins, self-assembled materials, virus capsids, and self-replicating biomolecules go through a variety of states on the way to or in the process of serving their function. The network of possible states and possible transitions between states plays a central role in determining whether they do so reliably. The goal of this workshop is to bring together researchers who study the kinetic networks of a variety of self-assembling, self-replicating, and programmable systems to exchange ideas about, methods for, and insights into the construction of kinetic networks from first principles or simulation data, the analysis of behavior resulting from kinetic network structure, and the algorithmic or heuristic design of kinetic networks with desirable properties.


by John Baez at April 16, 2015 01:00 AM

April 15, 2015

Symmetrybreaking - Fermilab/SLAC

AMS results create cosmic ray puzzle

New results from the Alpha Magnetic Spectrometer experiment defy our current understanding of cosmic rays.

New results from the Alpha Magnetic Spectrometer experiment disagree with current models that describe the origin and movement of the high-energy particles called cosmic rays.

These deviations from the predictions might be caused by dark matter, a form of matter that neither emits nor absorbs light. But, according to Mike Capell, a senior researcher at the Massachusetts Institute of Technology working on the AMS experiment, it’s too soon to tell.

“It’s a real head scratcher,” Capell says. “We cannot say we are seeing dark matter, but we are seeing results that cannot be explained by the conventional wisdom about where cosmic rays come from and how they get here. All we can say right now is that our results are consistently confusing.”

The AMS experiment is located on the International Space Station and consists of several layers of sensitive detectors that record the type, energy, momentum and charge of cosmic rays. One of AMS’s scientific goals is to search for signs of dark matter.

Dark matter is almost completely invisible—except for the gravitational pull it exerts on galaxies scattered throughout the visible universe. Scientists suspect that dark matter is about five times as prevalent as regular matter, but so far have observed it only indirectly.

If dark matter particles collide with one another, they could produce offspring such as protons, electrons, antiprotons and positrons. These new particles would look and act like the cosmic rays that AMS usually detects, but they would appear at higher energies and with different relative abundances than the standard cosmological models forecast.

“The conventional models predict that at higher energies, the amount of antimatter cosmic rays will decrease faster than the amount of matter cosmic rays,” Capell says. “But because dark matter is its own antiparticle, when two dark matter particles collide, they are just as likely to produce matter particles as they are to produce antimatter particles, so we would see an excess of antiparticles.”

This new result compares the ratio of antiprotons to protons across a wide energy range and finds that this proportion does not drop down at higher energies as predicted, but stays almost constant. The scientists also found that the momentum-to-charge ratio for protons and helium nuclei is higher than predicted at greater energies.

“These new results are very exciting,” says CERN theorist John Ellis. “They’re much more precise than previous data and they are really going to enable us to pin down our models of antiproton and proton production in the cosmos.”

In 2013 and 2014 AMS found a similar result for the proportion of positrons to electrons—with a steep climb in the relative abundance of positrons at about 8 billion electronvolts followed by the possible start of a slow decline around 275 billion electronvolts. Those results could be explained by pulsars spitting out more positrons than expected, or by particle acceleration in supernova remnants, Capell says.

“But antiprotons are so much heavier than positrons and electrons that they can’t be generated in pulsars,” he says. “Likewise, supernova remnants would not propagate antiprotons in the way we are observing.”

If this antimatter excess is the result of colliding dark matter particles, physicists should see a definitive bump in the relative abundance of antimatter particles with a particular energy followed by a decline back to the predicted value. Thus far, AMS has not collected enough data to see this full picture.

“This is an important new piece of the puzzle,” Capell says. “It’s like looking at the world with a really good new microscope—if you take a careful look, you might find all sort of things that you don’t expect.”

Theorists are now left with the task of developing better models that can explain AMS’s unexpected results. “I think AMS’s data is taking the whole analysis of cosmic rays in this energy range to a whole new level,” Ellis says. “It’s revolutionizing the field.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at April 15, 2015 03:03 PM

Lubos Motl - string vacua and pheno

Dark matter self-interaction detected?
Off-topic: My Facebook friend Vít Jedlička (SSO, Party of Free Citizens) established a new libertarian country, Liberland (To Live And Let Live), where the government doesn't get on your nerves. Before he elected himself the president, he had to carefully choose a territory where no one would bother him, where no one would ever start a war; he picked seven square kilometers in between Serbia and Croatia because these two nations wouldn't dare to damage one another. ;-) There's a catch for new citizens, however: the official language is Czech.
Lots of mainstream writers including BBC, The Telegraph, IBTimes, and Science Daily promote a preprint claiming that they see the non-gravitational forces between the particles that dark matter is composed of:
The behaviour of dark matter associated with 4 bright cluster galaxies in the 10kpc core of Abell 3827 (published in MNRAS)
Richard Massey (Durham) and 22 co-authors have analyzed the galaxy cluster Abell 3827 – which is composed of four very similar galaxies (unusual: they probably clumped together recently) – using new Hubble Space Telescope imaging and ESO's VLT/MUSE integral field spectroscopy.




The radius of the whole core – which is 1.3 billion light years from us – is 10 kpc. They show that each of the four galaxies has a dark matter halo. But at least one of those halos is offset by 1.62 kpc (plus minus 0.48 kpc, which includes all contributions to the errors, so that it's a 3.4 sigma "certainty").




Such offsets aren't seen in "free" galaxies but when galaxies collide, they may be expected due to the dark matter's collision with itself. With the most straightforward interpretation, the cross section is
\[
\frac{\sigma}{m}=(1.7 \pm 0.7)\times 10^{-4}\,{\rm cm}^2/{\rm g} \times \left(\frac{t}{10^9\,{\rm yrs}}\right)^{-2},
\]
where \(t\) is the infall duration. Well, if written in this way, it's only a 2.5-sigma certainty that \(\sigma\neq 0\), but that's probably considered enough for big claims by astrophysicists. (Astrophysicists apparently aren't cosmologists – only cosmology has turned into a hard science in the last 20 years.)

If that claim is right and dark matter interacts not only gravitationally (which is why it was introduced) but also by this additional interaction, it can not only rule out tons of models but also isolate some of the good ones (perhaps with WIMPs such as LSPs such as neutralinos).

The cross section cited above is safely below the recently published upper bound if \(t\) is at least comparable to billions (or tens of millions) years. The \(t\)-dependence of the new result makes it a bit vague – and one could say that similar parameter-, model-dependent claims about the cross section already exist in the literature.
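
Just to make the \(t\)-dependence concrete, here is a trivial evaluation of the quoted central value for a few assumed infall durations (my own numbers, using nothing but the formula above):

```python
# Evaluate the quoted sigma/m central value for a few assumed infall times,
# showing the quadratic sensitivity to t (formula taken from the text above).
def sigma_over_m(t_gyr, central=1.7e-4):
    """sigma/m in cm^2/g for an infall duration t in units of 10^9 years."""
    return central * t_gyr ** -2

for t in (0.1, 0.5, 1.0, 3.0):
    print(f"t = {t:>3} Gyr  ->  sigma/m ~ {sigma_over_m(t):.1e} cm^2/g")
```

The quadratic dependence on the unknown infall time is one reason to read the quoted number as a model-dependent estimate rather than a clean measurement of a single cross section.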

Because of some recent thinking of mine, I should also mention that I think it's also possible that an adequate MOND theory, with some specific form of nonlinear addition of the forces, could conceivably predict such an offset, too. A week ago, I independently rediscovered Milgrom's justification of MOND using the Unruh effect, after some exchanges with an enthusiastic young Czech female astrophysicist who liked some of my MOND/HOND remarks. For a while, my belief that "MOND and not dark matter" is basically right went above 10%, but it dropped below 10% again when I was reminded that there are no MOND theories that are really successful with the clusters.



Another dark-matter topic. Today's AMS press conference didn't seem to change the picture much.



Off-topic: If you need to be reminded of the distances inside (our) galaxy, then be aware that the British rapper Stephen Hawking has recorded a cover version of the Monty Python Galaxy Song for you to learn from. Hawking has even hired someone to represent all the stupid, obnoxious, and daft people – you feel that you've had enough – namely Brian Cocks (I have to write it in this way to avoid the copyright traps).

by Luboš Motl (noreply@blogger.com) at April 15, 2015 02:01 PM

Matt Strassler - Of Particular Significance

More on Dark Matter and the Large Hadron Collider

As promised in my last post, I’ve now written the answer to the second of the three questions I posed about how the Large Hadron Collider [LHC] can search for dark matter.  You can read the answers to the first two questions here. The first question was about how scientists can possibly look for something that passes through a detector without leaving any trace!  The second question is how scientists can tell the difference between ordinary production of neutrinos — which also leave no trace — and production of something else. [The answer to the third question — how one could determine this “something else” really is what makes up dark matter — will be added to the article later this week.]

In the meantime, after Monday’s post, I got a number of interesting questions about dark matter, why most experts are confident it exists, etc.  There are many reasons to be confident; it’s not just one argument, but a set of interlocking arguments.  One of the most powerful comes from simulations of the universe’s history.  These simulations

  • start with what we think we know about the early universe from the cosmic microwave background [CMB], including the amount of ordinary and dark matter inferred from the CMB (assuming Einstein’s gravity theory is right), and also including the degree of non-uniformity of the local temperature and density;
  • and use equations for known physics, including Einstein’s gravity, the behavior of gas and dust when compressed and heated, the effects of various forms of electromagnetic radiation on matter, etc.

The output of these simulations is a prediction for the universe today — and indeed, it roughly has the properties of the one we inhabit.

Here’s a video from the Illustris collaboration, which has done the most detailed simulation of the universe so far.  Note the age of the universe listed at the bottom as the video proceeds.  On the left side of the video you see dark matter.  It quickly clumps under the force of gravity, forming a wispy, filamentary structure with dense knots, which then becomes rather stable; moderately dense regions are blue, highly dense regions are pink.  On the right side is shown gas.  You see that after the dark matter structure begins to form, that structure attracts gas, also through gravity, which then forms galaxies (blue knots) around the dense knots of dark matter.  The galaxies then form black holes with energetic disks and jets, and stars, many of which explode.   These much more complicated astrophysical effects blow clouds of heated gas (red) into intergalactic space.

Meanwhile, the distribution of galaxies in the real universe, as measured by astronomers, is illustrated in this video from the Sloan Digital Sky Survey.   You can see by eye that the galaxies in our universe show a filamentary structure, with big nearly-empty spaces, and loose strings of galaxies ending in big clusters.  That’s consistent with what is seen in the Illustris simulation.

Now if you’d like to drop the dark matter idea, the question you have to ask is this: could the simulations still give a universe similar to ours if you took dark matter out and instead modified Einstein’s gravity somehow?  [Usually this type of change goes under the name of MOND.]

In the simulation, gravity causes the dark matter, which is “cold” (cosmo-speak for “made from objects traveling much slower than light speed”), to form filamentary structures that then serve as the seeds for gas to clump and form galaxies.  So if you want to take the dark matter out, and instead change gravity to explain other features that are normally explained by dark matter, you have a challenge.   You are in danger of not creating the filamentary structure seen in our universe.  Somehow your change in the equations for gravity has to cause the gas to form galaxies along filaments, and do so in the time allotted.  Otherwise it won’t lead to the type of universe that we actually live in.

Challenging, yes.  Challenging is not the same as impossible. But everyone should understand that the arguments in favor of dark matter are by no means limited to the questions of how stars move in galaxies and how galaxies move in galaxy clusters.  Any implementation of MOND has to explain a lot of other things that, in most experts’ eyes, are efficiently taken care of by cold dark matter.


Filed under: Dark Matter, LHC Background Info Tagged: atlas, cms, DarkMatter, LHC, neutrinos

by Matt Strassler at April 15, 2015 12:35 PM

April 14, 2015

Symmetrybreaking - Fermilab/SLAC

LSST construction begins

The Large Synoptic Survey Telescope will take the most thorough survey ever of the Southern sky.

Today a group will gather in northern Chile to participate in a traditional stone-laying ceremony. The ceremony marks the beginning of construction for a telescope that will use the world’s largest digital camera to take the most thorough survey ever of the Southern sky.

The 8-meter Large Synoptic Survey Telescope will image the entire visible sky a few times each week for 10 years. It is expected to see first light in 2019 and begin full operation in 2022.

Collaborators from the US National Science Foundation, the US Department of Energy, Chile’s Ministry of Foreign Affairs and Comisión Nacional de Investigación Científica y Tecnológica, along with several other international public-private partners, will participate in the ceremony.

“Today, we embark on an exciting moment in astronomical history,” says NSF Director France A. Córdova, an astrophysicist, in a press release. “NSF is thrilled to lead the way in funding a unique facility that has the potential to transform our knowledge of the universe.”

Equipped with a 3-billion-pixel digital camera, LSST will observe objects as they change or move, providing insight into short-lived transient events such as astronomical explosions and the orbital paths of potentially hazardous asteroids. LSST will take more than 800 panoramic images of the sky each night, allowing for detailed maps of the Milky Way and of our own solar system and charting billions of remote galaxies. Its observations will also probe the imprints of dark matter and dark energy on the evolution of the universe.
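
For a rough sense of scale, here is my own arithmetic from the numbers in this article, with 2 bytes per pixel assumed; real data volumes will differ once calibration frames, overheads and compression are included.

```python
# Order-of-magnitude data volume implied by the numbers in the article
# (3-billion-pixel camera, ~800 images per night, 10-year survey).
# 2 bytes per pixel (16-bit) is an assumption.
PIXELS_PER_IMAGE = 3e9
BYTES_PER_PIXEL  = 2
IMAGES_PER_NIGHT = 800
NIGHTS_PER_YEAR  = 365
YEARS            = 10

per_image = PIXELS_PER_IMAGE * BYTES_PER_PIXEL          # bytes
per_night = per_image * IMAGES_PER_NIGHT
total     = per_night * NIGHTS_PER_YEAR * YEARS

print(f"raw pixels per image : {per_image/1e9:.0f} GB")
print(f"raw pixels per night : {per_night/1e12:.1f} TB")
print(f"raw pixels, 10 years : {total/1e15:.0f} PB")
```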

“We are very excited to see the start of the summit construction of the LSST facility,” says James Siegrist, DOE associate director of science for high-energy physics. “By collecting a unique dataset of billions of galaxies, LSST will provide multiple probes of dark energy, helping to tackle one of science’s greatest mysteries.”

NSF and DOE will share responsibilities over the lifetime of the project. The NSF, through its partnership with the Association of Universities for Research in Astronomy, will develop the site and telescope, along with the extensive data management system. It will also coordinate education and outreach efforts. DOE, through a collaboration led by its SLAC National Accelerator Laboratory, will develop the large-format camera.

In addition, the Republic of Chile will serve as project host, providing (and protecting) access to some of the darkest and clearest skies in the world over the LSST site on Cerro Pachón, a mountain peak in northern Chile. The site was chosen through an international competition due to the pristine skies, low levels of light pollution, dry climate and the robust and reliable infrastructure available in Chile.

“Chile has extraordinary natural conditions for astronomical observation, and this is once again demonstrated by the decision to build this unique telescope in Cerro Pachón,” says CONICYT President Francisco Brieva. “We are convinced that the LSST will bring important benefits for science in Chile and worldwide by opening up a new window of observation that will lead to new discoveries.”

By 2020, 70 percent of the world’s astronomical infrastructure is expected to be concentrated in Chile.

 

Like what you see? Sign up for a free subscription to symmetry!

April 14, 2015 04:39 PM

April 13, 2015

Symmetrybreaking - Fermilab/SLAC

DES releases dark matter map

The Dark Energy Survey's detailed maps may help scientists better understand galaxy formation.

Scientists on the Dark Energy Survey have released the first in a series of dark matter maps of the cosmos. These maps, created with one of the world's most powerful digital cameras, are the largest contiguous maps created at this level of detail and will improve our understanding of dark matter's role in the formation of galaxies. Analysis of the clumpiness of the dark matter in the maps will also allow scientists to probe the nature of the mysterious dark energy, believed to be causing the expansion of the universe to speed up.

The new maps were released today at the April meeting of the American Physical Society in Baltimore, Maryland. They were created using data captured by the Dark Energy Camera, a 570-megapixel imaging device that is the primary instrument for the Dark Energy Survey.

Dark matter, the mysterious substance that makes up roughly a quarter of the universe, is invisible to even the most sensitive astronomical instruments because it does not emit or block light. But its effects can be seen by studying a phenomenon called gravitational lensing – the distortion that occurs when the gravitational pull of dark matter bends light from distant galaxies. Understanding the role of dark matter is part of the research program to quantify the role of dark energy, which is the ultimate goal of the survey.
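
To give a flavour of how such maps are built (a toy of mine, not the DES pipeline): the lensing shear stretches every background galaxy slightly and coherently, while intrinsic galaxy shapes are randomly oriented, so averaging the measured ellipticities of many galaxies in a patch of sky leaves an estimate of the shear, which can then be converted into a mass map.

```python
# Toy weak-lensing estimate: intrinsic galaxy ellipticities are random,
# so averaging the observed ellipticities of many galaxies in a sky patch
# recovers the small coherent shear imprinted by foreground (dark) matter.
# The shear value and galaxy count below are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

true_shear = 0.01 + 0.005j            # coherent lensing distortion (e1 + i*e2)
n_gal = 20000                         # background galaxies in the patch

# random intrinsic shapes, dispersion ~0.25 per component (typical order of magnitude)
intrinsic = rng.normal(0, 0.25, n_gal) + 1j * rng.normal(0, 0.25, n_gal)

observed = intrinsic + true_shear     # weak-lensing limit: shapes add linearly
estimate = observed.mean()

print(f"true shear      : {true_shear.real:+.4f} {true_shear.imag:+.4f}")
print(f"estimated shear : {estimate.real:+.4f} {estimate.imag:+.4f}")
print(f"statistical noise ~ {0.25/np.sqrt(n_gal):.4f} per component")
```

Real analyses must also correct for the telescope's point-spread function and calibrate the shear response before a distortion this small can be trusted.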

This analysis was led by Vinu Vikram of Argonne National Laboratory (then at the University of Pennsylvania) and Chihway Chang of ETH Zurich. Vikram, Chang and their collaborators at Penn, ETH Zurich, the University of Portsmouth, the University of Manchester and other DES institutions worked for more than a year to carefully validate the lensing maps.

"We measured the barely perceptible distortions in the shapes of about 2 million galaxies to construct these new maps," Vikram says. "They are a testament not only to the sensitivity of the Dark Energy Camera, but also to the rigorous work by our lensing team to understand its sensitivity so well that we can get exacting results from it."

The camera was constructed and tested at the US Department of Energy's Fermi National Accelerator Laboratory and is now mounted on the 4-meter Victor M. Blanco telescope at the National Optical Astronomy Observatory's Cerro Tololo Inter-American Observatory in Chile. The data were processed at the National Center for Supercomputing Applications at the University of Illinois in Urbana-Champaign.

The dark matter map released today makes use of early DES observations and covers only about three percent of the area of sky DES will document over its five-year mission. The survey has just completed its second year. As scientists expand their search, they will be able to better test current cosmological theories by comparing the amounts of dark and visible matter.

Those theories suggest that, since there is much more dark matter in the universe than visible matter, galaxies will form where large concentrations of dark matter (and hence stronger gravity) are present. So far, the DES analysis backs this up: The maps show large filaments of matter along which visible galaxies and galaxy clusters lie and cosmic voids where very few galaxies reside. Follow-up studies of some of the enormous filaments and voids, and of the enormous volume of data collected throughout the survey, will reveal more about this interplay of mass and light.

"Our analysis so far is in line with what the current picture of the universe predicts," Chang says. "Zooming into the maps, we have measured how dark matter envelops galaxies of different types and how together they evolve over cosmic time. We are eager to use the new data coming in to make much stricter tests of theoretical models."

View the Dark Energy Survey analysis.

 

Fermilab published a version of this article as a press release.

 

Like what you see? Sign up for a free subscription to symmetry!

April 13, 2015 04:39 PM



Last updated:
April 28, 2015 11:36 AM
All times are UTC.

Suggest a blog:
planet@teilchen.at