Particle Physics Planet


January 26, 2015

arXiv blog

How A Box Could Solve The Personal Data Conundrum

Software known as a Databox could one day both safeguard your personal data and sell it, say computer scientists.

January 26, 2015 11:31 PM

Christian P. Robert - xi'an's og

the density that did not exist…

On Cross Validated, I had a rather extended discussion with a user about a probability density

f(x_1,x_2)=\left(\dfrac{x_1}{x_2}\right)\left(\dfrac{\alpha}{x_2}\right)^{x_1-1}\exp\left\{-\left(\dfrac{\alpha}{x_2}\right)^{x_1} \right\}\mathbb{I}_{\mathbb{R}^*_+}(x_1,x_2)

as I thought it could be decomposed into two manageable conditionals and simulated by Gibbs sampling. The first component led to a Gumbel-like density

g(y|x_2)\propto ye^{-y-e^{-y}} \quad\text{with}\quad y=\left(\alpha/x_2 \right)^{x_1}\stackrel{\text{def}}{=}\beta^{x_1}

with y being restricted to either (0,1) or (1,∞) depending on β. The density is bounded and can be easily simulated by an accept-reject step. The second component leads to

g(t|x_1)\propto \exp\{-\gamma ~ t \}~t^{-{1}/{x_1}} \quad\text{with}\quad t=\dfrac{1}{{x_2}^{x_1}}

which offers the slight difficulty that it is not integrable when the first component is less than 1! So the above density does not exist (as a probability density).
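To spell out the divergence (a one-line check of my own, not part of the original exchange): near t=0 the exponential factor is essentially equal to one, so

\int_0^\varepsilon e^{-\gamma t}\,t^{-1/x_1}\,\text{d}t \approx \int_0^\varepsilon t^{-1/x_1}\,\text{d}t = \infty \quad\text{whenever}\quad 1/x_1\ge 1,

that is, whenever x_1\le 1. Equivalently, for fixed x_1 the joint behaves like x_2^{-x_1} as x_2 grows, so its x_2-tail integral diverges for every x_1\le 1 and the joint cannot be normalised.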

What I found interesting in this question was that, for once, the Gibbs sampler was the solution rather than the problem, i.e., that it pointed out the lack of integrability of the joint. (What I found less interesting was that the user did not acknowledge a lengthy discussion that we had previously had about the Gibbs implementation and that he later erased, that he lost interest in the question by not following up on my answer, a seemingly common feature of his, and that he provided neither source nor motivation for this zombie density.)
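For readers who prefer to see the divergence numerically, here is a minimal sketch of my own (with the illustrative choices α=1 and x_1=0.5, neither taken from the original question) that integrates the putative density in x_2 up to ever larger bounds; the partial integrals keep growing instead of converging:

import numpy as np

alpha, x1 = 1.0, 0.5   # any fixed x1 <= 1 exhibits the divergence

def f(x2):
    # putative joint density evaluated at fixed x1
    return (x1 / x2) * (alpha / x2) ** (x1 - 1) * np.exp(-(alpha / x2) ** x1)

for upper in [1e2, 1e4, 1e6, 1e8]:
    x2 = np.logspace(-3, np.log10(upper), 200_000)   # fine log-spaced grid
    vals = f(x2)
    partial = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x2))  # trapezoid rule
    print(f"integral over (0.001, {upper:.0e}) ~ {partial:.1f}")

# the partial integrals grow roughly like upper**(1 - x1):
# no normalising constant exists, so the "density" is improper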


Filed under: Kids, R, Statistics, University life Tagged: cross validated, Gibbs sampling, Gumbel distribution, improper posteriors, zombie density

by xi'an at January 26, 2015 11:15 PM

astrobites - astro-ph reader's digest

Evryscope, Greek for “wide-seeing”
Title: Evryscope Science: Exploring the Potential of All-Sky Gigapixel-Scale Telescopes

Authors: Nicholas M. Law et al.

First Author’s Institution: University of North Carolina at Chapel Hill

How fantastic would it be to image the entire sky, every few minutes, every night, for a series of years? The science cases for such surveys —in today’s paper they are called All-Sky Gigapixel Scale Surveys— are numerous, and span a huge range of astronomical topics. Just to begin with, such surveys could detect transiting giant planets, sample Gamma Ray Bursts and nearby Supernovae, and capture a wealth of other rare and/or unexpected transient events that are further described in the paper.

Evryscope is a telescope that sets out to take such a minute-by-minute movie of the sky accessible to it. It is designed as an array of extremely wide-angle telescopes, contrasting with the traditional meaning of the word “tele-scope” (Greek for “far-seeing”) through its emphasis on extremely wide angles (“Evryscope” is Greek for “wide-seeing”). The array is currently being constructed by the authors at the University of North Carolina at Chapel Hill, and is scheduled to be deployed at the Cerro Tololo Inter-American Observatory (CTIO) in Chile later this year.

But wait, aren’t there large sky surveys out there that are already patrolling the sky a few times a week? Yes, there are! But a bit differently. There is for example the tremendously successful Sloan Digital Sky Survey (SDSS— see Figure 1, and read more about SDSS-related Astrobites here, here, here), which has paved the way for numerous other surveys such as Pan-STARRS, and the upcoming Large Synoptic Survey Telescope (LSST). These surveys are all designed around a similar concept: they utilize a single large-aperture telescope that repeatedly observes few-degree-wide fields to achieve deep imaging. Then the observations are tiled together to cover large parts of the sky several times a week.


Figure 1: The Sloan Digital Sky Survey Telescope, a 2.5m telescope that surveys large areas of the available sky a few times a week. The Evryscope-survey concept is a bit different, valuing the continuous coverage of almost the whole available sky over being able to see faint far-away objects. Image from the SDSS homepage.

The authors of today’s paper note that surveys like the SDSS are largely optimized for finding day-or-longer-type events such as supernovae —and are extremely good at that— but are not sensitive to the very diverse class of even shorter-timescale transient events (remembering the list of example science cases above). Up until now, such short-timescale events have generally been studied with individual small telescopes staring at single, limited, fields of view. Expanding on this idea, the authors then propose the Evryscope as an array of small telescopes arranged so that together they can survey the whole available sky minute-by-minute. In contrast to SDSS-like surveys, an Evryscope-like survey will not be able to detect targets nearly as faint, but rather focuses on the continuous monitoring of the brightest objects it can see.


Figure 2: The currently under-construction Evryscope, showing the 1.8m diameter custom-molded dome. The dome houses 27 individual 61mm aperture telescopes, each of which has its own CCD detector. Figure 1 from the paper.

Evryscope: A further description

Evryscope is designed as an array of 27 61mm optical telescopes, arranged in a custom-molded fiberglass dome, which is mounted on an off-the-shelf German Equatorial mount (see Figure 2). Each telescope has its own 29 MPix CCD detector, adding up to a total detector size of 0.78GPix! The authors refer to Evryscope’s observing strategy as a “ratcheting survey”, which goes like this: the dome tracks the instantaneous field of view (see Figure 3, left) by rotating ever so slowly to compensate for Earth’s rotation, taking 2-minute exposures back-to-back for two hours, and then resetting and repeating (see Figure 3, right). This ratcheting approach enables Evryscope to image essentially every part of the visible sky for at least 2 hours every night!


Figure 3: Evryscope sky coverage (blue), for a mid-latitude Northern-hemisphere site (30°N), showing the SDSS DR7 photometric survey (red) for scale. Left: Instantaneous Evryscope coverage (8660 sq. deg.), including the individual camera fields-of-view (skewed boxes). Right: The Evryscope sky coverage over one 10-hour night. The intensity of the blue color corresponds to the length of continuous coverage (between 2 and 10 hours, in steps of 2 hours) provided by the ratcheting survey, covering a total of 18400 sq.deg. every night. Figures 3 (left) and 5 (right) from the paper.

With its Gigapixel-scale detector, Evryscope will gather large amounts of data, amounting to about 100 TB of compressed FITS images per year! The data will be stored and analyzed on site. The pipeline will be optimized to provide real-time detection of interesting transient events, with rapid retrieval of compressed associated images, allowing for rapid follow-up with other more powerful telescopes. Real-time analysis of the sheer amount of data that Gigapixel-scale systems like Evryscope create would have been largely unfeasible just a few years ago. The rise of consumer digital imaging, ever-increasing computing power, and decreasing storage costs have, however, made the overall cost manageable (not a few million dollars, but much less than one million dollars!) with current technology.
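As a rough sanity check on that number (the bit depth, number of usable nights, and compression factor below are my own assumptions, not figures from the paper), a 0.78 GPix mosaic read out every 2 minutes does land in the ~100 TB/year ballpark:

gigapix = 0.78e9                     # total mosaic size quoted above (pixels)
bytes_per_pixel = 2                  # assumed 16-bit raw images
exposures_per_night = 10 * 60 // 2   # 2-minute exposures over a ~10-hour night
nights_per_year = 300                # assumed usable nights per year
compression = 1.5                    # assumed lossless FITS compression factor

raw_bytes = gigapix * bytes_per_pixel * exposures_per_night * nights_per_year
print(f"~{raw_bytes / compression / 1e12:.0f} TB of compressed images per year")
# prints roughly 90-100 TB, consistent with the figure quoted in the post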

Outlook

The Evryscope array is scheduled to see first light later this year at CTIO in Chile, where it will start to produce a powerful minute-by-minute data-set on transient events happening in its field of view, which until now have not been feasible to capture. But it won’t see the whole sky, just the sky it can see from its location on Earth. So then, why stop there, why not reuse the design and expand? Indeed, this is what the authors are already thinking —see the whitepaper about the “Antarctic-Evryscope” on the group’s website. And who knows, maybe soon after we will have an Evryscope at *evry* major observatory in the world working together to record a continuous movie of the whole sky?

by Gudmundur Stefansson at January 26, 2015 10:35 PM

Peter Coles - In the Dark

Lines on the Death of Demis Roussos

After all the sound and fury accompanying yesterday’s elections in Greece there’s one item of much sadder news. The legendary Demis Roussos has passed away. I can’t think of him without thinking of Abigail’s Party:

 

So, farewell then
Demis Roussos.

“For ever
And ever
And ever
And ever
You’ll
beeeeeee
the One!”

You sang.

But, alas,
Nothing
Ever
Lasts
For ever
And ever
And ever
And ever.
And now you’ve
Gone.

But at least
Alison Steadman
liked you.

And so did
Keith’s Mum.

 by Peter Coles (aged 51 ½).


by telescoper at January 26, 2015 06:57 PM

Emily Lakdawalla - The Planetary Society Blog

At last! A slew of OSIRIS images shows fascinating landscapes on Rosetta's comet
The first results of the Rosetta mission are out in Science magazine. The publication of these papers means that the OSIRIS camera team has finally released a large quantity of closeup images of comet Churyumov-Gerasimenko, taken in August and September of last year. I explain most of them, with help from my notes from December's American Geophysical Union meeting.

January 26, 2015 05:50 PM

Lubos Motl - string vacua and pheno

A reply to an anti-physics rant by Ms Hossenfelder
S.H. of Tokyo University sent me a link to another text about the "problems with physics". The write-up is one month old and for quite some time, I refused to read it in its entirety. Now I did so and the text I will respond to is really, really terrible. The author is Sabine Hossenfelder and the title reads
Does the scientific method need revision?

Does the prevalence of untestable theories in cosmology and quantum gravity require us to change what we mean by a scientific theory?
To answer this, No. Only people who have always misunderstood how science works – at least science since the discoveries by Albert Einstein – need to change their opinions about what a scientific theory is and how it is looked for. Let me immediately get to the propositions in the body of the write-up and respond.




Here we go:
Theoretical physics has problems.
Theoretical physics solves problems and organizes ideas about how Nature works. Anything may be substituted for "it" in the sentence "it has problems" but the only reason why someone would substitute "theoretical physics" into this sentence is that he or she hates science and especially the most remarkable insights that physics discovered in recent decades.




The third sentence says:
But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s.
This is an absolutely preposterous claim. First, since the mid 1970s, there have been several important experimental discoveries – like the discoveries of the W-bosons, Z-bosons, Higgs boson, top quark, neutrino oscillations; non-uniformities of the cosmic microwave background, the cosmological constant, and so on, and so on.

But much more shockingly, there have been long sequences of profound and amazing theoretical discoveries, including supersymmetry, supergravity, superstring theory, its explanation for the black hole thermodynamics, D-branes, dualities, holography, AdS/CFT correspondence, AdS/mundane_physics correspondences, and so on, and so on. Many of these results deservedly boast O(10,000) citations – like AdS/CFT – which actually sometimes beats the figures of the Standard Model. Which of those discoveries are more important is debatable and the citation counts can't be treated dogmatically but some of the recent discoveries are unquestionably in the "same league" as the top papers that have led to the Standard Model.

It is silly not to consider these amazing advances "fully important" just because they're primarily theoretical in character. The W-bosons, Z-bosons, Higgs boson etc. have been believed to exist since the 1960s even though they were only discovered in 1983 or 2012, respectively, and they were "just a theory" for several previous decades. Beta decay was known by competent particle physicists to be mediated by the W-boson even though no W-boson had been seen by 1983. Exactly analogously, we know that the gravitational force (and other forces) is mediated by closed strings even though we haven't seen a fundamental string yet. The situations are absolutely analogous and people claiming that it is something "totally different" are hopelessly deluded.

One can become virtually certain about certain things long before the thing is directly observed – and that is true not only for particular species of bosons but also for the theoretical discoveries since the mid 1970s that I have mentioned.
Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data.
In the framework of quantum field theory, almost all discoveries can be reduced to the "discovery of a new particle". So if someone finds such a discovery unimpressive, he or she simply shows his or her disrespect for the whole discipline. But the discoveries were not just discoveries of new particles.
But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.
That's completely untrue. Exactly since the 1970s, state-of-the-art physics has switched from quantum field theory and Riemannian geometry to string theory as its foundational layer. People have learned that this more correct new framework is different from the previous approximate ones; but from other viewpoints, it is exactly equivalent thanks to previously overlooked relationships and dualities.

Laymen and physicists who are not up to their job may have failed to notice that a fundamental paradigm shift has taken place in physics since the mid 1970s but that can't change the fact that this paradigm shift has occurred.
Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low hanging fruits have been picked, [experiments become hard, relevant problems are harder...].

Still, it is a frustrating situation and this makes you wonder if not there are other reasons for lack of progress, reasons that we can do something about.
If Ms Hossenfelder finds physics this frustrating, she should leave it – and after all, her bosses should do this service for her, too. Institutionalized scientific research has also become a part of the Big Government and it is torturing lots of people who would love to be liberated but they still think that to pretend to be scientists means to be on a great welfare program. Niels Bohr didn't establish Nordita as another welfare program, however, so he is turning in his grave.

Ms Hossenfelder hasn't written one valuable paper in her life but her research has already cost the taxpayers something that isn't far from one million dollars. It is not shocking that she tries to pretend that there are no results in physics – in this way, she may argue that she is like "everyone else". But she is not. Some people have made amazing or at least pretty interesting and justifiable discoveries, she is just not one of those people. She prefers to play the game that no one has found anything and the taxpayers are apparently satisfied with this utterly dishonest demagogy.

If you have the feeling that the money paid to the research is not spent optimally, you may be right but you may want to realize that it's thanks to the likes of Hossenfelder, Smolin, and others who do nothing useful or intellectually valuable and who are not finding any new truths (and not even viable hypotheses) about Nature.
Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.
We don't "need" a game changer now more than we needed it at pretty much any moment in the past (or we will need it in the future). People often dream about game changers and game changers sometimes arrive.

We don't really "need" any breakthrough technology and we certainly don't need "clean energy" because we have lots of clean energy, especially with the rise of fracking etc.

We may "need" warp drive but people have been expressing similar desires for decades and competent physicists know that warp drive is prohibited by the laws of relativity.

And we don't "need" transporters – perhaps the parties in the Ukrainian civil war need such things.

Finally, we are more resilient and further from self-destruction than we were at pretty much any point in the past. Also, we don't need to bash Facebook which is just another very useful pro-entertainment website. It is enough to ignore Facebook if you think it's a waste of time – I am largely doing so ;-) but I still take the credit for having brought lots of (more socially oriented) people who like it to the server.

So every single item that Hossenfelder enumerates in her list "what we need" is crap.
It is our lacking understanding of space, time, matter, and their quantum behavior that prevents us from better using what nature has given us.
This statement is almost certainly untrue, too. A better understanding of space, time, and matter – something that real physicists are actually working on, and not just bashing – will almost certainly confirm that warp drives and similar things don't exist. Better theories will give us clearer explanations why these things don't exist. There may be some "positive applications" of quantum gravity but right now, we don't know what they could be and they are surely not the primary reason why top quantum gravity people do the research they do.

The idea that the future research in quantum gravity will lead to practical applications similar to warp drive is a belief, a form of religion, and circumstantial evidence (and sometimes almost rigorous proofs) makes this belief extremely unlikely.
And it is this frustration that lead people inside and outside the community to argue we’re doing something wrong, ...
No, this is a lie, too. As I have already said, physics bashers are bashing physics not because of frustration that physics isn't making huge progress – it obviously *is* making huge progress. Physics bashers bash physics in order to find excuses for their own non-existent or almost non-existent results in science – something I know very well from some of the unproductive physicists in Czechia whom the institutions inherited from the socialist era. They try to hide that they are nowhere near the top physicists – and most of them are just useless parasites. And many listeners buy these excuses because the number of incredibly gullible people who love to listen to similar conspiracy theories (not so much to science) is huge. And if you combine this fact with many ordinary people's disdain for mathematics etc., it is not surprising that some of these physics bashers may literally make living out of their physics bashing and nothing else.
The arxiv categories hep-th and gr-qc are full every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.
This is complete rubbish. The tens of thousands of papers are full of various kinds of evidence supporting this claim or another claim about the inner workings of Nature. In particular, the scientific case for string theory as the right framework underlying the Universe is completely comparable to the case for the Higgs boson in the 1960s. The Higgs boson was discovered in 2012, 50 years after the 1960s, but that doesn't mean that adequate physicists in the 1960s were saying that "there wasn't any evidence supporting that theory".

People who were not embarrassed haven't said such a thing and people who are not embarrassing themselves are not saying a similar thing about string theory – and other things – today.
To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.
None of these statements is right. We have paid more than enough attention to "experimental quantum gravity". It is a vastly overstudied and overfunded discipline. All sensible physicists realize that it is extremely unlikely that we will directly observe some characteristic effects of quantum gravity in the near future. The required temperatures are around \(10^{32}\) kelvins, the required distances are probably \(10^{-35}\) meters, and so on. Max Planck has known the values of these natural units since the late 19th century.
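For concreteness, those figures are simply the standard Planck units; here is a quick sketch of my own computing them from CODATA constant values (nothing in it is specific to the post being discussed):

import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m / s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J / K

planck_length = math.sqrt(hbar * G / c**3)                  # ~1.6e-35 m
planck_temperature = math.sqrt(hbar * c**5 / (G * k_B**2))  # ~1.4e32 K

print(f"Planck length      ~ {planck_length:.2e} m")
print(f"Planck temperature ~ {planck_temperature:.2e} K")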

So we have paid more than enough attention to this strategy.

It is also untrue that the progress in theoretical physics since the mid 1970s has been done "without experimental input". The amount of data we know about many things is huge. To a large extent, the knowledge of one or two basic experiments showing quantum mechanics and one or two experiments testing gravity is enough to deduce a lot. General relativity, quantum mechanics, and string theory largely follow from (subsets of) these several elementary experiments.

On the other hand, it is not true that scientific progress cannot be made without (new) experimental input. Einstein found special relativity even though he wasn't actively aware of the Michelson-Morley experiment. He could have deduced the whole theory independently of any experiments. Experiments had previously been used to construct e.g. Maxwell's equations but Einstein didn't deal with them directly. Einstein only needed the equations themselves. More or less the same thing occurred 10 years later when he discovered general relativity. But the same approach based on "nearly pure thought" has also been victorious in the case of Bekenstein's and Hawking's black hole thermodynamics, string theory, and in some other important examples.

So the idea that one can't find important things without some new experiments – excluding experiments whose results are old and generally known – is obviously untrue. Science haters can say that this or that important part of science "is not science" or "is not physics" but that doesn't change anything about the fact that certain insights about Nature may be found and have been found and supported by highly convincing bodies of evidence in similar ways. Only simpletons may pay attention to a demagogue's proclamation that "something is not science". This emotional scream is not a technical argument for or against any scientifically meaningful proposition.

I will omit another repetitive paragraph where Hossenfelder advocates "experimental quantum gravity". She thinks that tons of effects are easily observable because she's incompetent.
Yes, experimental tests of quantum gravity are farfetched. But if you think that you can’t test it, you shouldn’t put money into the theory either.
This is totally wrong. It is perfectly sensible to pay almost all of the quantum gravity research money to the theorists because whether someone likes it or not, quantum gravity is predominantly a theoretical discipline. It is about people's careful arguments, logical thoughts, and calculations that make our existing knowledge fit together more seamlessly than before.

In particular, the goal of quantum gravity is to learn how space and time actually work in our Universe, a world governed by the postulates of quantum mechanics. Quantum gravity is not – and any discipline of legitimate science is not – a religious cult that trains its followers to believe in far-fetched theories. The idea that you may observe completely new effects of quantum gravity (unknown to the theorists) in your kitchen is far-fetched and that really means that it is extremely unlikely. And its being extremely unlikely is the rational reason why almost no money is going into this possibility. This justification can't be "beaten" by the ideological cliché that everything connected with experiments in the kitchen should have a priority because it's "more scientific".

It's not more scientific. A priori, it is equally scientific. A posteriori, it is less scientific because arguments rooted in science almost reliably show that such new quantum gravity effects in the kitchen are very unlikely – some of them are rather close to supernatural phenomena such as telekinesis. So everything that Ms Hossenfelder says is upside down once again.
And yes, that’s a community problem because funding agencies rely on experts’ opinion. And so the circle closes.
Quantum gravity theorists and string theorists are getting money because they do absolutely amazing research, sometimes make a medium-importance discovery, and sometimes a full-fledged breakthrough. And if or when they don't do such a thing for a few years, they are still exceptional people who are preserving and nurturing the mankind's cutting-edge portrait of the Universe. The folks in the funding agencies are usually less than full-fledged quantum gravity or string theorists. But as long as the system at least barely works, they still know enough – much more than an average human or Ms Hossenfelder knows – so they may see that something fantastic is going on here or there even though they can't quite join the research. That's true for various people making decisions in government agencies but that's true e.g. for Yuri Milner, too.

As Ms Hossenfelder indicated, the only way this logic may change – and yes, I think it is unfortunately changing to some extent – is that the funding decisions don't depend on expert opinion (and on any people connected with knowledge and progress in physics) at all. The decisions may be made by people who hate physics and who have no idea about contemporary physics. The decisions may depend on would-be authorities who pick winners and losers by arbitrarily stating that "this is science" and "this is not science". I don't have to say how such decisions (would?) influence the research.
To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without experimental test whatsoever, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”.
Dawid just wrote something that isn't usual among the prevailing self-appointed "critics and philosophers of physics" but he didn't really write anything that would be conceptually new. At least intuitively, physicists like Dirac or Einstein have known all these things for a century. Of course that "non-empirical facts" have played a role in the search for the deeper laws of physics and this role became dramatic about 100 years ago.
Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency — it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.
This was actually the only paragraph I fully read when I replied to S.H. in Tokyo for the first time – and this paragraph looked "marginally acceptable" to me from a certain point of view.

Well, the paragraph is only solving a terminological issue. Should the violation of unitarity or instability of the Universe that would manifest itself a Planck time after the Big Bang, or something like that be counted as "empirical" or "non-empirical" input? I don't really care much. It's surely something that most experts consider consistency conditions, like Dawid.

We may also say that we "observe" that the Universe isn't unstable and doesn't violate unitarity. But this is a really tricky assertion. Our interpretation of all the observations really assumes that probabilities are non-negative and add to 100%. Whatever our interpretation of any experiment is, it must be adjusted to this assumption. So it's a pre-empirical input. It follows from pure logic. Also, some instabilities and other violations of what we call "consistency conditions" (e.g. unitarity) may be claimed to be very small and therefore hard to observe. But some of these violations will be rejected by theorists, anyway, even if they are very tiny because they are violations of consistency conditions.

I don't really care about the terminology. What's important in practice is that these "consistency conditions" cannot be used as justifications for some new fancy yet meaningful experiments.
A huge problem with the lack of empirical fact is that theories remain axiomatically underconstrained.
The statement is surely not true in general. String theory is 100% constrained. It cannot be deformed at all. It has many solutions but its underlying laws are totally robust.
This already tells you that the idea of a theory for everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.
If the multiverse exists, and it is rather likely that it does, it doesn't mean that the laws of physics are ambiguous. It just means that the world is "larger" and perhaps has more "diverse subregions" than previously thought. But all these regions follow the same unambiguous laws of physics – laws of physics we want to understand as accurately as possible.

The comment about "stripping away axioms" is tendentious, too, because it suggests that there is some "a priori known" number of axioms that is right. But it's not the case. If someone randomly invents a set of axioms, it may be too large (overconstrained) or too small (underconstrained). In the first case, some axioms should be stripped away, in the latter case, some axioms should be added. But the very fact that a theory predicts or doesn't predict the multiverse doesn't imply that its set of axioms is underconstrained or overconstrained.

For example, some theories of inflation predict that inflation is not eternal and no multiverse is predicted; other, very analogous theories (that may sometimes differ by values of parameters only!) predict that inflation is eternal and the Universe emerges. So Hossenfelder's claim that the multiverse is linked with "underconstrained axioms" is demonstrably incorrect, too.
Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature.
There is nothing mystical about this important mode of thinking in theoretical physics. It's how special relativity was found, much like general relativity, the idea that atoms exist, the idea that the motion of atoms is linked to heat, not to mention the Dirac equation, gauge theories, and many other things. A large fraction of theoretical physicists have made their discovery by optimizing the "beauty" of the candidate laws of physics. People like Dirac have emphasized the importance of the mathematical beauty in the search for the laws of physics all the time, and for a good reason.



That's the most important thing Dirac needed to write on a Moscow blackboard.

And the more recent breakthroughs in physics we consider, the greater role such considerations have played (and will play). And the reason why this "mathematical beauty" works isn't supernatural – even though many of us love to be amazed by this power of beautiful mathematics and this meme is often sold to the laymen, too. One may give Bayesian explanations why "more beautiful" laws are more likely to hold than generic, comparable, but "less beautiful" competitors. Bayesian inference dictates that we assign comparable prior probabilities to competing hypotheses, and because the mathematically beautiful theories have a smaller number of truly independent assumptions and building blocks, and therefore a smaller number of ways to invent variations, their prior probability won't be split among so many "sub-hypotheses". Moreover, as we describe deeper levels of reality, the risk that an inconsistency emerges grows ever higher, and the "not beautiful theories" are increasingly likely to lead to one kind of inconsistency or another.
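A toy numerical version of that prior-splitting argument (the counts below are invented purely for illustration, not taken from any of the texts discussed): give two rival frameworks equal prior weight, and let the less constrained one come in many independent variants; each of those variants then starts with only a tiny share of the prior.

prior_per_framework = 0.5     # comparable priors for the two rival frameworks

rigid_variants = 1            # a tightly constrained theory with no adjustable versions
sprawling_variants = 10_000   # an ad hoc family with many tunable versions (assumed)

print(prior_per_framework / rigid_variants)        # 0.5   for the single rigid theory
print(prior_per_framework / sprawling_variants)    # 5e-05 for each of the many variants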

Sabine Hossenfelder's denial of this principle only shows her lack of familiarity with physics, its logic, and its history.
You can thus never arrive at a theory that describes our universe without taking into account observations, period.
Whether someone has ever found important things without "any observations" is questionable. But it is still true and important that a good theorist may need 1,000 times less empirical data than a worse theorist to find and write down a correct theory, and a bad theorist will not find the right theory with arbitrarily large amounts of data! And that's the real "period", that's why the mathematical beauty is important for good theoretical physicists – and the others have almost no chance to make progress these days.
The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.
I have already said that there is no relationship between the multiverse and the underdeterminedness of the sets of axioms.
(The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)
But these comments of Tegmark's are purely verbal philosophical remarks without any scientific content. They don't imply anything for observations, not even in principle. For this reason, they have nothing to do with physical models of eternal inflation or the multiverse or even specific compactifications of string/M-theory, which are completely specific theories about Nature and the observations of it.
Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions.
Most people writing papers about the multiverse – more precisely, papers evoking the anthropic principle – use the probability calculus incorrectly. But the general statement that invoking probabilities in deductions of properties of Nature is a "modification of the scientific method" is a total idiocy. The usage of probabilities has been "allowed" in the scientific method for a very long time. In fact, science could never have been done without probabilities at all! All of science is about looking at the body of our observations and saying which explanation is more likely and which explanation is less likely.



And of course that a theory with a "larger Universe than previously thought" and perhaps with some extra rules to pinpoint "our location" in this larger world is an OK competitor to describe the Universe a priori.

Every experimenter needs to do some calculations involving probabilities – probabilities that a slightly unexpected result is obtained by chance, and so on – all the time. Ms Hossenfelder just doesn't have a clue what science is.
In a Nature comment out today, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome.
Please not again.
I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability — and apparently don’t need to care because they are getting published and paid regardless.
I don't get paid a penny but I am still able to see that the people whose first obsession is "testability" are either crackpots or third-class physicists such as Ms Hossenfelder who have no idea what they are talking about.

The purpose of science is to find the truth about Nature. Easy testability (in practice) means that there exists a procedure, an experimental procedure, that may accelerate the process by which we decide whether the hypothesis is true or not. But the testability doesn't actually make the hypothesis true (or more true) and scientists are looking for correct theories, not falsifiable theories, and it's an entirely different thing.

One could say that the less falsifiable a theory is, the better. We are looking for theories that withstand tests. So they won't be falsified anytime soon! A theory that has already resisted some attempts to be falsified is in a better shape than a theory that has already been falsified. The only "philosophical" feature of this kind that is important is that the propositions made by the theory are scientifically meaningful – i.e. having some non-tautological observable consequences in principle. If this is satisfied, the hypothesis is perfectly scientific and its higher likelihood to be falsified soon may only hurt. If one "knows" that a hypothesis is likely to die after a soon-to-be-performed experiment, it's probably because he "knows" that the hypothesis is actually unlikely.
See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and live from outright bizarre speculations as long as you wrap them into enough math.
Maybe this is what Ms Hossenfelder has learned from her superiors such as Mr Smolin but no one is teaching these things at good places – like those I have been affiliated with.
I cringe every time a string theorist starts talking about beauty and elegance.
Because you are a stupid cringing crackpot.
Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?
The history of physics, especially 20th century physics, plus the Bayesian arguments showing that more beautiful theories are more likely. The sense of beauty used by these physicists – one that works so often – is very different from the sense of beauty used by average humans or average women in some respects. But it also has some similar features so it is similar in other respects.

Even more important is to point out that this extended discussion about "strings and beauty" is a straw man because almost no arguments referring to "beauty" can be found in papers on string theory. Many string theorists would actually disagree that "beauty" is a reason why they think that the theory is on the right track. Ms Hossenfelder is basically proposing illogical connections between her numerous claims, all of which happen to be incorrect.

I will omit one paragraph repeating content-free clichés that science describes Nature. Great, I agree that science describes Nature.
Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.
The only problem is that all theories that Ms Hossenfelder has targeted for her criticism do describe Nature and are excellent and sometimes paramount additions to science (sometimes nearly established ones, sometimes very promising ones), unlike everything that Ms Hossenfelder and similar "critics of physics" have ever written in their whole lives.

by Luboš Motl (noreply@blogger.com) at January 26, 2015 01:25 PM

Tommaso Dorigo - Scientificblogging

Reviews In Physics - A New Journal
The publishing giant Elsevier is about to launch a new journal, Reviews in Physics. This will be a fully open-access, peer-reviewed journal which aims at providing short reviews (15 pages maximum) on physics topics at the forefront of research. The web page of the journal is here, and a screenshot is shown below.

read more

by Tommaso Dorigo at January 26, 2015 01:20 PM

CERN Bulletin

CERN Bulletin Issue No. 04-05/2015
Link to e-Bulletin Issue No. 04-05/2015 | Link to all articles in this issue.

January 26, 2015 11:11 AM

Emily Lakdawalla - The Planetary Society Blog

It's Official: LightSail Test Flight Scheduled for May 2015
This May, the first of The Planetary Society's two member-funded LightSail spacecraft is slated to hitch a ride to space for a test flight aboard an Atlas V rocket.

January 26, 2015 10:33 AM

January 25, 2015

Christian P. Robert - xi'an's og

a week in Oxford

I spent [most of] the past week in Oxford in connection with our joint OxWaSP PhD program, which is supported by the EPSRC, and constitutes a joint Centre of Doctoral Training in statistical science focussing on data-intensive environments and large-scale models. The first cohort of a dozen PhD students had started their training last Fall with the first year spent in Oxford, before splitting between Oxford and Warwick to write their thesis. Courses are taught over a two-week block, with a two-day introduction to the theme (Bayesian Statistics in my case), followed by reading, meetings, daily research talks, mini-projects, and a final day in Warwick including presentations of the mini-projects and a concluding seminar (involving Jonty Rougier and Robin Ryder, next Friday). This approach by bursts of training periods is quite ambitious in that it requires a lot from the students, both through the lectures and in personal investment, and reminds me somewhat of a similar approach at École Polytechnique where courses are given over fairly short periods. But it is also profitable for highly motivated and selected students in that total immersion into one topic and a large amount of collective work bring them up to speed with a reasonable basis and the option to write their thesis on that topic. Hopefully, I will see some of those students next year in Warwick working on some Bayesian analysis problem!

On a personal basis, I also enjoyed very much my time in Oxford, first for meeting with old friends, albeit too briefly, and second for cycling, as the owner of the great Airbnb place I rented kindly let me use her bike, which allowed me to get around quite freely! Even on a train trip to Reading. As it was a road racing bike, it took me a trip or two to get used to it, especially on the first day when the roads were somewhat icy, but I enjoyed the lightness of it, relative to my lost mountain bike, to the point of considering switching to a road bike for my next bike… I also had some apprehensions about riding at night, which I avoid while in Paris, but got over them until the very last night, when I had a very close brush with a car entering from a side road, which either had not seen me or thought I would let it pass. Gave me the opportunity of shouting Oï!


Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: airbnb, Bayesian statistics, EPSRC, mountain bike, PhD course, PhD students, slides, slideshare, stolen bike, The Bayesian Choice, University of Oxford, University of Warwick

by xi'an at January 25, 2015 11:15 PM

arXiv blog

First Videos Created of Whole Brain Neural Activity in an Unrestrained Animal

Neuroscientists have recorded the neural activity in the entire brains of freely moving nematode worms for the first time.


The fundamental challenge of neuroscience is to understand how the nervous system controls an animal’s behavior. In recent years, neuroscientists have made great strides in determining how the collective activity of many individual neurons is critical for controlling behaviors such as arm reach in primates, song production in the zebra finch and the choice between swimming or crawling in leeches.

January 25, 2015 05:51 PM

Peter Coles - In the Dark

Social Physics & Astronomy

When I give popular talks about Cosmology,  I sometimes look for appropriate analogies or metaphors in television programmes about forensic science, such as CSI: Crime Scene Investigation which I watch quite regularly (to the disdain of many of my colleagues and friends). Cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe;  forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens.

Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish the truth about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works. I have a feeling that I’ve stretched this analogy to breaking point but at least it provides some kind of excuse for writing about an interesting historical connection between astronomy and forensic science by way of the social sciences.

The gentleman shown in the picture on the left is Lambert Adolphe Jacques Quételet, a Belgian astronomer who lived from 1796 to 1874. His principal research interest was in the field of celestial mechanics. He was also an expert in statistics. In Quételet’s time it was by no means unusual for astronomers to be well-versed in statistics, but he was exceptionally distinguished in that field. Indeed, Quételet has been called “the father of modern statistics” and, amongst other things, he was responsible for organizing the first ever international conference on statistics in Paris in 1853.

His fame as a statistician owed less to its applications to astronomy, however, than to the fact that in 1835 he had written a very influential book which, in English, was titled A Treatise on Man but whose somewhat more verbose original French title included the phrase physique sociale (“social physics”). I don’t think modern social scientists would see much of a connection between what they do and what we do in the physical sciences. Indeed the philosopher Auguste Comte was annoyed that Quételet appropriated the phrase “social physics” because he did not approve of the quantitative, statistics-based approach that it had come to represent. For that reason Comte ditched the term from his own work and invented the modern subject of sociology…

Quételet had been struck not only by the regular motions performed by the planets across the sky, but also by the existence of strong patterns in social phenomena, such as suicides and crime. If statistics was essential for understanding the former, should it not be deployed in the study of the latter? Quételet’s first book was an attempt to apply statistical methods to the development of man’s physical and intellectual faculties. His follow-up book Anthropometry, or the Measurement of Different Faculties in Man (1871) carried these ideas further, at the expense of a much clumsier title.

This foray into “social physics” was controversial at the time, for good reason. It also made Quételet extremely famous in his lifetime and his influence became widespread. For example, Francis Galton wrote about the deep impact Quételet had on a person who went on to become extremely famous:

Her statistics were more than a study, they were indeed her religion. For her Quételet was the hero as scientist, and the presentation copy of his “Social Physics” is annotated on every page. Florence Nightingale believed – and in all the actions of her life acted on that belief – that the administrator could only be successful if he were guided by statistical knowledge. The legislator – to say nothing of the politician – too often failed for want of this knowledge. Nay, she went further; she held that the universe – including human communities – was evolving in accordance with a divine plan; that it was man’s business to endeavour to understand this plan and guide his actions in sympathy with it. But to understand God’s thoughts, she held we must study statistics, for these are the measure of His purpose. Thus the study of statistics was for her a religious duty.

The person in question was of course Florence Nightingale. Not many people know that she was an adept statistician who was an early advocate of the use of pie charts to represent data graphically; she apparently found them useful when dealing with dim-witted army officers and dimmer-witted politicians.

The type of thinking described in the quote  also spawned a number of highly unsavoury developments in pseudoscience, such as the eugenics movement (in which Galton himself was involved), and some of the vile activities related to it that were carried out in Nazi Germany. But an idea is not responsible for the people who believe in it, and Quételet’s work did lead to many good things, such as the beginnings of forensic science.

A young medical student by the name of Louis-Adolphe Bertillon was excited by the whole idea of “social physics”, to the extent that he found himself imprisoned for his dangerous ideas during the revolution of 1848, along with one of his Professors, Achille Guillard, who later invented the subject of demography, the study of racial groups and regional populations. When they were both released, Bertillon became a close confidant of Guillard and eventually married his daughter Zoé. Their second son, Alphonse Bertillon, turned out to be a prodigy.

Young Alphonse was so inspired by Quételet’s work, which had no doubt been introduced to him by his father, that he hit upon a novel way to solve crimes. He would create a database of measured physical characteristics of convicted criminals. He chose 11 basic measurements, including length and width of head, right ear, forearm, middle and ring fingers, left foot, height, length of trunk, and so on. On their own none of these individual characteristics could be probative, but it ought to be possible to use a large number of different measurements to establish identity with a very high probability. Indeed, after two years’ study, Bertillon reckoned that the chances of two individuals having all 11 measurements in common were about four million to one. He further improved the system by adding photographs, in portrait and from the side, and a note of any special marks, like scars or moles.
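As a rough illustration of the combinatorics behind that figure (the per-measurement coincidence rate below is my own assumption, chosen to land near the quoted odds, not a number from Bertillon's work): if the 11 measurements were independent and each agreed between two random people about one time in four, all 11 would coincide about once in four million.

p_single_match = 0.25    # assumed chance two random people agree on one measurement
n_measurements = 11

p_all_match = p_single_match ** n_measurements
print(f"about 1 in {1 / p_all_match:,.0f}")   # about 1 in 4,194,304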

Bertillonage, as this system became known, was rather cumbersome but proved highly successful in a number of high-profile criminal cases in Paris. By 1892, Bertillon was exceedingly famous but nowadays the word bertillonage only appears in places like the Observer’s Azed crossword.

The main reason why Bertillon’s fame subsided and his system fell into disuse was the development of an alternative and much simpler method of criminal identification: fingerprints. The first systematic use of fingerprints on a large scale was implemented in India in 1858 in an attempt to stamp out electoral fraud.

The name of the British civil servant who had the idea of using fingerprinting in this way was Sir William James Herschel (1833-1917), the eldest child of Sir John Herschel, the astronomer, and thus the grandson of Sir William Herschel, the discoverer of Uranus. Another interesting connection between astronomy and forensic science.

 

 

 


by telescoper at January 25, 2015 04:13 PM

astrobites - astro-ph reader's digest

Grad students: apply now for ComSciCon 2015!

ComSciCon 2015 will be the third in the annual series of Communicating Science workshops for graduate students

Applications are now open for the Communicating Science 2015 workshop, to be held in Cambridge, MA on June 18-20th, 2015!

Graduate students at US institutions in astronomy, and all fields of science and engineering, are encouraged to apply. The application will close on March 1st.

It’s been more than two years since we announced the first ComSciCon workshop here on Astrobites. Since then, we’ve received almost 2000 applications from graduate students across the country, and we’ve welcomed about 150 of them to three national and local workshops held in Cambridge, MA. You can read about last year’s workshop to get a sense for the activities and participants at ComSciCon events.

While acceptance to the workshop is competitive, attendance at the workshop is free of charge and travel support will be provided to accepted applicants.

Participants will build the communication skills that scientists and other technical professionals need to express complex ideas to their peers, experts in other fields, and the general public. There will be panel discussions on the following topics:

  • Communicating with Non-Scientific Audiences
  • Science Communication in Popular Culture
  • Communicating as a Science Advocate
  • Multimedia Communication for Scientists
  • Addressing Diversity through Communication

In addition to these discussions, ample time is allotted for interacting with the experts and with attendees from throughout the country to discuss science communication and develop science outreach collaborations. Workshop participants will produce an original piece of science writing and receive feedback from workshop attendees and professional science communicators, including journalists, authors, public policy advocates, educators, and more.

ComSciCon attendees have founded new science communication organizations in collaboration with other students at the event, published more than 25 articles written at the conference in popular publications with national impact, and formed lasting networks with our student alumni and invited experts. Visit the ComSciCon website to learn more about our past workshop programs and participants.


Group photo at the 2014 ComSciCon workshop

If you can’t make it to the national workshop in June, check to see whether one of our upcoming regional workshops would be a good fit for you.

This workshop is sponsored by Harvard University, the Massachusetts Institute of Technology, University of Colorado Boulder, the American Astronomical Society, the American Association for the Advancement of Science, the American Chemical Society, and Microsoft Research.

by Nathan Sanders at January 25, 2015 03:44 PM

Peter Coles - In the Dark

Last days on the Ice

telescoper:

Earlier this month I reblogged a post about the launch of the balloon-borne SPIDER experiment in Antarctica. Here’s a follow-up from last week. SPIDER parachuted back down to the ice on January 17th and was recovered successfully. Now the team will be leaving the ice and returning home, hopefully with some exciting science results!

I’d love to go to Antarctica, actually. When I was finishing my undergraduate studies at Cambridge I applied for a place on the British Antarctic Survey, but didn’t get accepted. I don’t suppose I’ll get the chance now, but you never know…

Originally posted on SPIDER on the Ice:

Four of the last five of the SPIDER crew– Don, Ed, Sasha, and I– are slated to leave the Ice tomorrow morning. That means this is probably my last blog post– at least until SPIDER 2! It has been an incredible few months, but I can’t say I’m all that sad for it to be ending. I’m ready to have an adventure in New Zealand and then get home to all the people I’ve missed so much while I’ve been away.

As is the nature of field campaigns, it has been an absolute roller coaster, but the highs have certainly made the lows fade in my memory. We got SPIDER on that balloon, and despite all of the complexities and possible points of failure, it worked. That’s a high I won’t be coming down from any time soon.

On top of success with our experiment, we’ve also had the privilege of…



by telescoper at January 25, 2015 01:27 PM

Tommaso Dorigo - Scientificblogging

The Plot Of The Week: CMS Search For Majorana Neutrinos
The CMS collaboration yesterday released the results of a search for Majorana neutrinos in dimuon data collected by the CMS detector in 8 TeV proton-proton collisions delivered by the LHC in 2012. If you are short of time and just need an executive summary, here it is: no such thing is seen, unfortunately, and limits are set on the production rate of heavy neutrinos N as a function of their mass. If you have five spare minutes, however, you might be interested in some more detail of the search and its results.

read more

by Tommaso Dorigo at January 25, 2015 12:39 PM

January 24, 2015

Geraint Lewis - Cosmic Horizons

The Constant Nature of the Speed of light in a vacuum
Wow! It has been a while, but I do have an excuse! I have been finishing up a book on the fine-tuning of the Universe and hopefully it will be published (and will become a really big best seller?? :) in 2015. But time to rebirth the blog, and what better way to start than with a gripe.

There's been some chatter on the interweb about a recent story about the speed of light in a vacuum being slowed down. Here's one; here's another. Some of these squeak loudly about how the speed of light may not be "a constant", implying that something has gone horribly wrong with the Universe. Unfortunately, some of my physicsy colleagues were equally shocked by the result.

Why would one be shocked? Well, the speed of light being constant to all observers is central to Einstein's Special Theory of Relativity. Surely if these results are right, and Einstein is wrong, then science is a mess, etc etc etc.

Except there is nothing mysterious about this result. Nothing strange. In fact it was completely expected. The question boils down to what you mean by speed.

Now, you might be thinking that speed is simply related to the time it takes for a thing to travel from here to there. But we're dealing with light here, which, in classical physics, is represented by oscillations in an electromagnetic field, while in our quantum picture it's oscillations in the wave function; the difference is not important.

When you first encounter electromagnetic radiation (i.e. light) you are often given a simple example of a single wave propagating in a vacuum. Every student of physics will have seen this picture at some point;
The electric (and magnetic) fields oscillate as a sine wave, and the speed at which bumps in the wave move forward is the speed of light. This was one of the great successes of James Clerk Maxwell, one of the greatest physicists who ever lived. In his work, he fully unified electricity and magnetism and showed that electromagnetic radiation, light, was the natural consequence.

Without going into too many specific details, this is known as the phase velocity. For light in a vacuum, the phase velocity is equal to c.

One of the coolest things I ever learnt was Fourier series, or the notion that you can construct arbitrary wave shapes by adding together sine and cosine waves. This still freaks me out a bit to this day, but instead of an electromagnetic wave being a simple sine or cosine, you can add waves to create a wave packet, basically a lump of light.

But when you add waves together, the resulting lump doesn't travel at the same speed as the waves that comprise the packet. The lump moves with what's known as the group velocity. Now, the group velocity and the phase velocity are, in general, different. In fact, they can be very different: it is possible to construct a packet that does not move at all, while all the waves making up the packet are moving at c!
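
If you want to see the two speeds with your own eyes, here is a little numerical sketch (my own toy example, not from the original post): pick a made-up dispersion relation ω(k) = ck + bk³, superpose a bunch of cosine waves around a central wavenumber, and track where the lump goes. The centroid of the packet moves at dω/dk, the group velocity, rather than at ω/k, the phase velocity.

    import numpy as np

    # Toy dispersion relation (an arbitrary choice, just to make the two speeds differ).
    c, b = 1.0, 0.02
    omega = lambda k: c * k + b * k**3

    # A narrow, Gaussian-weighted spread of wavenumbers around k0.
    k0, sigma_k = 5.0, 0.2
    ks = np.linspace(k0 - 3 * sigma_k, k0 + 3 * sigma_k, 201)
    amps = np.exp(-0.5 * ((ks - k0) / sigma_k) ** 2)

    x = np.linspace(-50.0, 150.0, 4000)

    def packet(t):
        # Fourier-style superposition: many cosines add up to a localised lump.
        return sum(a * np.cos(k * x - omega(k) * t) for a, k in zip(amps, ks))

    def centroid(t):
        weight = packet(t) ** 2
        return np.sum(x * weight) / np.sum(weight)

    t0, t1 = 0.0, 10.0
    v_lump = (centroid(t1) - centroid(t0)) / (t1 - t0)
    v_group = c + 3 * b * k0**2     # d(omega)/dk evaluated at k0
    v_phase = omega(k0) / k0        # omega/k evaluated at k0

    print("phase velocity (omega/k):    %.2f" % v_phase)
    print("group velocity (d omega/dk): %.2f" % v_group)
    print("measured speed of the lump:  %.2f" % v_lump)

With these numbers the bumps of the carrier move at about 1.5 while the lump itself moves at about 2.5; rig the superposition to slow the group velocity down instead and you get exactly the sort of slowed-down packet the news stories were about, with nothing happening to c.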

So, this result was achieved by manipulating the waves to produce a packet whose group velocity was measurably smaller than that of a simple wave. That's it! Now, this is not meant to diminish the work of the experimenters, as this is not easy to set up and measure, but it means nothing for the speed of light, relativity etc etc. And the researchers know that!

And as I mentioned, the difference between phase and group velocity has been understood for a long time, going back to Hamilton (of Hamiltonian fame) in 1839 and Rayleigh in 1877. These initial studies were of waves in general, mainly sound waves, not necessarily light, but the mathematics is basically the same.

Before I go, one of the best courses I took as an undergraduate was called Vibrations and Waves. At the time, I didn't really see the importance of what I was learning, but the mathematics was cool. I still love thinking about it. Over the years, I've come to realise that waves are everywhere, all throughout physics, science, and, well, everything. Want to model a flag? Make a ball-and-spring model. Want to make a model of matter? Ball and spring. And watch the vibrations!

Don't believe me? Watch this - waves are everywhere.





by Cusp (noreply@blogger.com) at January 24, 2015 11:25 PM

Christian P. Robert - xi'an's og

would you wear those tee-shirts?!

Here are two examples of animal “face” tee-shirts I saw advertised in The New York Times and that I would not consider wearing. At any time.


Filed under: Kids, pictures Tagged: animals, Asian lady beetle, fashion, tarsier, tee-shirt, The New York Times

by xi'an at January 24, 2015 11:15 PM

Peter Coles - In the Dark

The Dormouse

Just spent an extremely enjoyable Saturday morning on the Sussex University campus for one of our Applicant Visit Days; there’ll be several more of these occasions over the next few months and I only hope we have such glorious weather for the others!

I thought I’d celebrate the fact that it all went well by posting a bit of old-fashioned good-time jazz. It’s getting on for seven years since the death of the great Humphrey Lyttelton, who was not only a fine trumpeter and bandleader but also blessed with a wickedly dry sense of humour. During the late 1940s and early 1950s Humph’s band had a terrific front line consisting of Wally Fawkes on clarinet and the superb Keith Christie on trombone, led by himself on trumpet. Apparently when they did late-night gigs, Keith Christie had a habit of occasionally dozing off while someone else was soloing. Not unreasonably, this behaviour reminded Humph of the Dormouse at the Mad Hatter’s Tea Party in Alice’s Adventures in Wonderland, so he decided to write a tune with that name in honour of Keith Christie. I have the studio recording of The Dormouse, which was released on Parlophone as a 78rpm single, and it’s such a blast that I love it to bits, but this is a live performance which I just came across a few days ago. It comes from a famous concert at the Royal Festival Hall in July 1951 sponsored by the National Federation of Jazz Organizations (NFJO) which featured a number of bands as well as Humph’s.

Anyway, it’s a delicious helping of New Orleans jazz served with a generous side order of English eccentricity, guaranteed to bring a smile to the most crabbed of faces. The trombone introduction and fills by Keith Christie, in whose honour the tune was written, are typically full of humour, but the improvised ensemble playing is absolutely terrific, especially from about 1.55 onwards. Humph’s band of this time didn’t have the greatest rhythm section – Humph himself joked that they often sounded like they were wearing diving boots – but the front line was world class.

ps. It definitely should be “The Dormouse” not “The Doormouse”…

pps. Unless my ears deceive me I think this number is announced by Kenneth Horne…


by telescoper at January 24, 2015 02:45 PM

Christian P. Robert - xi'an's og

brief stop in Edinburgh

Yesterday, I was all too briefly in Edinburgh for a few hours, to give a seminar in the School of Mathematics, on the random forests approach to ABC model choice (that was earlier rejected). (The slides are almost surely identical to those used at the NIPS workshop.) One interesting question at the end of the talk was on the potential bias in the posterior predictive expected loss, bias against some model from the collection of models being evaluated for selection. In the sense that the array of summaries used by the random forest could fail to capture features of a particular model and hence discriminate against it. While this is correct, there is no fundamental difference from implementing a posterior probability based on the same summaries. And the posterior predictive expected loss offers the advantage of testing, that is, of returning, for representative simulations from each model, the corresponding model prediction error, which highlights poor performances on some models. A further discussion over tea led me to ponder whether or not we could expand the use of random forests to Bayesian quantile regression. However, this would imply a monotonicity structure on a collection of random forests, which sounds daunting…

My stay in Edinburgh was quite brief as I drove to the Highlands after the seminar, heading to Fort William. Although the weather was rather ghastly, the traffic was fairly light and I managed to get there unscathed, without hitting any of the deer of Rannoch Moor (saw one dead by the side of the road though…) or the snow banks of the narrow roads along Loch Lubnaig. And, as usual, it still was a pleasant feeling to drive through those places associated with climbs and hikes, Crianlarich, Tyndrum, Bridge of Orchy, and Glencoe. And to get in town early enough to enjoy a quick dinner at The Grog & Gruel, reflecting that I must have had half a dozen dinners there with friends (or not) over the years. And drinking a great heather ale to them!


Filed under: Mountains, pictures, Statistics, Travel, University life, Wines Tagged: ABC, ABC model choice, Edinburgh, Fort William, quantile regression, random forests, Scotland, The Grog & Gruel, University of Edinburgh

by xi'an at January 24, 2015 01:18 PM

Emily Lakdawalla - The Planetary Society Blog

Addressing some common questions about Comet Lovejoy
Lowell Observatory's Matthew Knight addresses several points of confusion that have repeatedly come up in the coverage of Comet Lovejoy.

January 24, 2015 12:19 AM

January 23, 2015

CERN Bulletin

CERN Bulletin Issue No. 01-02/2015
Link to e-Bulletin Issue No. 01-02/2015. Link to all articles in this issue.

January 23, 2015 02:13 PM

CERN Bulletin

CERN Bulletin Issue No. 51-52/2014
Link to e-Bulletin Issue No. 51-52/2014. Link to all articles in this issue.

January 23, 2015 02:13 PM

CERN Bulletin

CERN Bulletin Issue No. 04-05/2015
Link to e-Bulletin Issue No. 04-05/2015. Link to all articles in this issue.

January 23, 2015 02:05 PM

astrobites - astro-ph reader's digest

The Age of Solar System Exploration
If you haven’t heard about the Rosetta mission, and the European Space Agency’s remarkable feat of landing on a comet, then you must be like its lander Philae: living under a rock.

What you probably also didn’t hear much about is the slew of other ways (both in the recent past and the near future) we are exploring, up close and personal, the more unusual parts of the Solar System. The tiny stuff; the names you didn’t memorize in grade school. That one that doesn’t get to play with the “big boys” anymore. The years 2014 and 2015 may well be remembered as the time when our exploration of the solar system truly took off, as we explored asteroids, comets, and minor planets.

Here’s a look back at what we’ve accomplished in the last year, and what we’re about to achieve in the year to come.

Scroll to the bottom to see an abbreviated list of important upcoming events for these missions.


The most recent image of Comet 67P/C-G, taken on January 16th by the orbiting Rosetta spacecraft. Rosetta and its lander Philae (the first objects to orbit and land on a comet) will follow the comet through its orbit to closest solar approach in August 2015. Image c/o ESA

ESA’s Rosetta Mission Lands on a Comet: August 2014 – December 2015

In one of the biggest science news stories of the year, the European Space Agency’s Rosetta spacecraft reached Comet 67P/C-G (pictured above). It entered orbit on August 6th, 2014, after a journey of more than 10 years and 6.4 billion kilometers. On November 12th, the spacecraft’s landing probe Philae became the first human-made object to land on a comet. Unfortunately, a malfunction in the landing system resulted in Philae bouncing a kilometer off the surface, eventually coming to rest in the shadow of a cliff.

Unable to get adequate sunlight to charge its batteries, Philae quickly went into hibernation mode. Before shutting down it was able to return measurements of gaseous water vapor, but was unable to drill into the surface to measure the content of the solid ice.

Many models suggest that the high water content of Earth may have come from collisions with comets or asteroids during the late stages of the Earth’s formation. Most water is made of ordinary hydrogen and oxygen, but a tiny fraction contains a deuterium atom (a hydrogen isotope made of a proton and a neutron) in hydrogen’s place. One of the main scientific goals of pursuing comets is to identify the source of Earth’s water. The key to accomplishing this is to see if the abundance of deuterium (see this Astrobite) in a comet’s water matches the levels found on Earth. Philae’s water vapor measurements indicate a deuterium abundance more than 3 times higher than on Earth. Perhaps this suggests asteroids are more responsible for Earth’s water supply than comets.

The Rosetta team hopes to be able to confirm this result, and perhaps obtain an ice sample with Philae’s drill, if the lander wakes up. Most of the team has little doubt the lander will resume function in the coming months. But since the orbiting Rosetta still has yet to pinpoint Philae’s final landing spot, when the probe will be able to get the 5 to 7 extra watts of power it needs is a tough question to answer. The comet (and accompanying spacecraft) is approaching the Sun and will reach its closest point in August 2015. The team hopes the change in scenery may bring more sunlight to Philae’s solar cells.

In the meantime, Rosetta will make a close approach orbit of the comet in February 2015 — snapping photos which should resolve details down to a few inches — and is planning to soar through an outgassing jet in July, when the comet’s tail begins to form. The Rosetta mission is scheduled to end in December 2015, although large public support for the mission may help researchers extend its lifetime into 2016.


NASA’s spacecraft Dawn captures images of Ceres: our nearest dwarf planet neighbor and the largest asteroid in the asteroid belt. Dawn will enter an orbit around Ceres beginning in March 2015. Image c/o: NASA/JPL

NASA’s Dawn Mission Orbits Asteroid Ceres: March 2015 – July 2015

A few years ago, NASA’s Dawn spacecraft spent about 12 months orbiting Vesta, one of the largest asteroids in the asteroid belt. For more details about Dawn’s encounter with Vesta, see this past Astrobite.

After leaving orbit around Vesta, Dawn spent two and a half years traveling across the asteroid belt to catch up to Ceres, the largest known asteroid. On its own, Ceres makes up about 30% of the mass of the entire asteroid belt. It’s so massive that its gravity is strong enough to shape it into a rough sphere, so Ceres is also classified as a dwarf planet.

This week, NASA released the latest images of Ceres (left), taken as Dawn approaches the asteroid. In just a few weeks, on March 6th 2015, Dawn will enter orbit around Ceres. Asteroids and comets are pieces of the debris left over from the formation of the solar system planets. NASA wants to understand Ceres’ formation, its material makeup, and why it didn’t grow any larger. This information will help distinguish between theories that describe how planets formed in our solar system.

As has previously been discussed on Astrobites, long distance observations indicate the presence of water on the dwarf planet. Just like comets, asteroids may be responsible for delivering the Earth’s water supply, and the Dawn team hopes to improve upon these measurements. Dawn’s main science mission continues until July 2015, after which it will be shut off and remain in orbit around Ceres for a very long time.


An artist’s illustration of NASA’s New Horizons spacecraft, which will pass by Pluto in July 2015. The spacecraft will not be able to maintain an orbit around the tiny dwarf planet, but will instead fly farther out into the Kuiper Belt. Image c/o: NASA/JPL

NASA’s New Horizons Flies by Pluto into the Kuiper Belt: July 14 2015

Launched in 2006, NASA’s New Horizons has been navigating space for over 9 years, and has perhaps the most exciting itinerary of all the spacecraft on this list. To save on fuel, New Horizons executed a gravity assist (or slingshot) maneuver around Jupiter in February 2007. Some beautiful photos of Jupiter resulted as an added benefit of this layover.

New Horizons has been in frequent phases of hibernation since its encounter with Jupiter, and is now making its approach to Pluto: probably the most popular dwarf planet. On July 14th 2015, New Horizons will make its closest approach within 10,000 kilometers of Pluto. The spacecraft won’t be stopping at Pluto, though, but will continue into the Kuiper Belt to investigate objects astronomers can barely see from Earth. To learn more about New Horizons, and its path after Pluto, see this Astrobite.

The Future of Solar System Science

With Rosetta, Dawn, and New Horizons continuing to gather information, the future looks bright for humanity’s goal of understanding our solar system. Asteroids, comets, dwarf planets, and Kuiper Belt Objects hold many clues for how the planets — including our own — formed from the initial ingredients around the Sun. In the next decade, NASA hopes to complete a mission to capture an asteroid and bring it into orbit around the Moon. This would be a remarkable opportunity to study the remnants of the solar system’s formation.

For the present, here is a timeline of the most important events coming in the future of space exploration:

February 2015: Rosetta makes close approach to Comet 67P/C-G, resolving features as small as several inches.
March 6 2015: Dawn enters orbit around Ceres
Spring/Summer 2015: Rosetta’s lander Philae (hopefully) wakes up and takes new samples from the surface
July 14 2015: New Horizons makes closest ever approach of Pluto, on its way into the Kuiper Belt
July 2015: Rosetta scheduled to make pass through Comet’s outgassing jet
July 2015: Scheduled end of Dawn science mission
August 2015: Comet 67P/C-G closest approach of Sun: Rosetta observes comet activity and tail
December 2015: Scheduled end of Rosetta science mission
January 2019: New Horizons makes possible pass-by of Kuiper Belt Object 1110113Y

by Ben Cook at January 23, 2015 02:02 PM

Symmetrybreaking - Fermilab/SLAC

Superconducting electromagnets of the LHC

You won't find these magnets in your kitchen.

Magnets are something most of us are familiar with, but you may not know that magnets are an integral part of almost all modern particle accelerators. These magnets aren’t the same as the one that held your art to your parent’s refrigerator when you were a kid. Although they have a north and south pole just as your fridge magnets do, accelerator magnets require quite a bit of engineering.

When an electrically charged particle such as a proton moves through a constant magnetic field, it moves in a circular path. The size of the circle depends on both the strength of the magnets and the energy of the beam. Increase the energy, and the ring gets bigger; increase the strength of the magnets, and the ring gets smaller.
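
To put rough numbers on that scaling, here is a small back-of-the-envelope sketch (mine, not something from the article): a particle of unit charge bends with radius r = p/(qB), which in accelerator-friendly units becomes r in meters ≈ p in GeV/c divided by (0.3 × B in tesla). The momentum and field values below are illustrative choices, not official LHC parameters.

    # Bending radius of a unit-charge particle: r [m] ~ p [GeV/c] / (0.3 * B [T]).
    # The (p, B) pairs below are illustrative, chosen only to show the scaling.

    def bending_radius_m(p_gev_per_c, b_tesla):
        return p_gev_per_c / (0.3 * b_tesla)

    for p_gev, b in [(450.0, 0.54), (4000.0, 4.8), (6500.0, 7.8)]:
        r = bending_radius_m(p_gev, b)
        print("p = %6.0f GeV/c,  B = %4.2f T  ->  r of roughly %4.0f m" % (p_gev, b, r))

The point of the exercise is that the radius comes out roughly the same in every row: if the ring is fixed, the field has to climb in step with the beam momentum.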

The Large Hadron Collider is an accelerator, a crucial word that reminds us that we use it to increase the energy of the beam particles. If the strength of the magnets remained the same, then as we increased the beam energy, the size of the ring would similarly have to increase. Since the size of the ring necessarily remains the same, we must increase the strength of the magnets as the beam energy is increased. For that reason, particle accelerators employ a special kind of magnet.

When you run an electric current through a wire, it creates a magnetic field; the strength of the magnetic field is proportional to the amount of electric current. Magnets created this way are called electromagnets. By controlling the amount of current, we can make electromagnets of any strength we want. We can even reverse the magnet’s polarity by reversing the direction of the current.

Given the connection between electrical current and magnetic field strength, it is clear that we need huge currents in our accelerator magnets. To accomplish this, we use superconductors, materials that lose their resistance to electric current when they are cooled enough. And “cooled” is an understatement. At 1.9 Kelvin (about 450 degrees Fahrenheit below zero), the centers of the magnets at the LHC are one of the coldest places in the universe—colder than the temperature of space between galaxies.

Given the central role of magnets in modern accelerators, scientists and engineers at Fermilab and CERN are constantly working to make even stronger ones. Although the main LHC magnets can generate a magnetic field about 800,000 times that generated by the Earth, future accelerators will require even more. Electromagnetism was first observed in the early 1800s, yet the technology of electromagnets remains a vibrant and crucial part of the laboratories’ futures.


A version of this article was published in Fermilab Today.

 


by Don Lincoln, Fermi National Accelerator Laboratory at January 23, 2015 02:00 PM

Peter Coles - In the Dark

Graduation Engagement

Yesterday I took part in the Winter Graduation ceremony for students in the School of Mathematical and Physical Sciences (MPS) at the University of Sussex; as Head of School it was my very pleasant duty to read out the names of the graduands as they passed across the stage at the Brighton Dome, where the ceremony takes place.

Let me first of all congratulate again all those who graduated yesterday!

The Winter ceremony is largely devoted to students graduating from postgraduate programmes, either taught (MSc or MA)  or research-based (PhD). We don’t have huge numbers of such students in MPS so I had relatively few names to read out yesterday. Most of our students graduate in the summer ceremony. Sharing the ceremony with us this time was the School of Business, Management and Economics which, by contrast, has huge taught postgraduate programmes so the acting Head of School for BMEC (as it is called) had a lot of work to do!

It was nice to have Sanjeev Bhaskar back in place as Chancellor (he was absent on filming duty for last year’s summer ceremonies), who is charming and friendly as well as frequently hilarious.

Anyway, getting to the point, graduation is a special moment for all students involved, but there was an extra extra special moment for two students in particular yesterday.

I was sitting in the front of the platform party very near Sanjeev when a male student from BMEC graduated. As well as shaking the Chancellor’s hand he had a fairly long discussion with him and slipped him what appeared to be a small box in such a way that the audience couldn’t see it. Then, after walking across the stage, the student waited at the far side instead of returning to the auditorium by going down the stairs.

Funny, I thought, but at that point I had no idea what was going on.

The next graduand was a female student. When she got to Sanjeev he shook her hand as usual but then called back the previous one, still standing on the stage, and gave the box back to him. Of course it contained an engagement ring. And so it came to pass that Jing Liu (kneeling) proposed to Qin Me (standing).


It was a wonderful moment, although it struck me as a high-risk strategy and it wasn’t at all obvious at first sight how it would turn out. She doesn’t look that sure in the picture, actually! She did, however, say “yes” and the couple are now engaged to be married. I wish them every happiness. I’m sure I speak for everyone at the ceremony when I say that it brought an extra dimension of joy to what was already a wonderfully joyous occasion.

Our lives seem to revolve around rituals of one sort or another. Graduation is one, marriage is another. This is definitely the first time I’ve seen this particular combination.

I love graduation ceremonies. As the graduands go across the stage you realize that every one of them has a unique story to tell and a whole universe of possibilities in front of them. How their lives will unfold no-one can tell, but it’s a privilege to be there for one important milestone on their journey.

UPDATE: Here’s a video of the ceremony. The big event happens from about 44:48…


by telescoper at January 23, 2015 08:49 AM

January 22, 2015

Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3902 - January 15, 2015
Larry Crumpler gives an update on the status of Opportunity's traverse toward Marathon Valley.

January 22, 2015 11:10 PM

astrobites - astro-ph reader's digest

Simulating X-ray Binary Winds
  • Title: Stellar wind in state transitions of high-mass X-ray binaries
  • Authors: J. Čechura and P. Hadrava
  • First Author’s Institution: Astronomical Institute, Academy of Sciences, Czech Republic
  • Paper Status: Accepted for publication in Astronomy & Astrophysics


    A 3D surface model of X-ray binary Cygnus X-1. Contours and lines represent regions of equal density. Fig. 10 from the paper.

How do you simulate a massive star’s behavior when its closest neighbor is a black hole? Astronomers routinely make simplifying assumptions to understand how stars behave. If there are thousands of stars orbiting one another, treat them as point masses. If there is a single, solitary star, treat it as a perfectly symmetrical sphere. But just like massless pendulums and frictionless pulleys, these ideal scenarios aren’t reality. Sometimes, to truly understand stars, you need to roll up your sleeves and start thinking about pesky details—things like three dimensions, X-ray photoionization, and the Coriolis force.

Windy with a chance of X-rays

In today’s paper, Čechura and Hadrava examine what happens to the runaway gas from the surface of massive stars—the stellar wind. In particular, they look at systems with massive stars so close to a companion neutron star or black hole that the stellar wind is jarred into a new orbit and heated to the point of emitting X-rays. This is a high-mass X-ray binary.

The authors begin with a 2D model to understand how the stellar wind behaves differently when one star is more or less massive than the other, or when the wind itself is programmed into the model in subtly different ways. As it turns out, emitting tons of X-rays is more than the end result of stellar wind particles slamming into an accretion disk. Those X-rays continue the story by ionizing nearby gas and slowing down the incoming wind. When the wind slows, the overall shape of the system changes thanks to gravity and the Coriolis force, which in turn affects how many X-rays are emitted!

Cygnus X-1’s split personality

With these variables better understood, the authors create a full-fledged 3D hydrodynamic model of a high-mass X-ray binary. A 3D model returns more accurate densities and velocities than a 2D model because the geometry is more realistic. They base this simulation on the well-studied X-ray binary Cygnus X-1, which is generally observed in one of two states: either it is emitting relatively few X-rays of high energy (low/hard), or it is emitting many X-rays of low energy (high/soft). In the low/hard state, wind from the massive star is actively flowing into an accretion disk around the companion. The high/soft state takes over when that flow is disrupted.

To simulate the transition from Cygnus X-1’s low/hard state to its high/soft state, the authors suddenly increase the X-ray luminosity of the compact companion. As a result, gas in the stellar wind never makes it to the accretion disk because it is bombarded with X-rays. It turns out that this X-ray photoionization process is even more important than the simpler 2D model suggested.


3D cross-section of Cygnus X-1’s stellar wind in the low/hard X-ray state, when material is flowing into the compact companion’s accretion disk. Each column represents a 90-degree change in viewing angle. From top to bottom, the rows show particle density, velocity magnitude, and degree of ionization. The black region in the ionization panels is an X-ray shadow, where no particles are photoionized. Fig. 7 from the paper.


3D cross-section of Cygnus X-1’s stellar wind in the high/soft X-ray state, when material from the stellar wind is not flowing into the compact companion’s accretion disk. As in the previous figure, the columns show three mutually perpendicular viewing angles and the rows show different physical parameters (density, velocity magnitude, and degree of ionization). Fig. 8 from the paper.

Of course, even this detailed 3D model isn’t perfect. In the future, the authors would like to more accurately consider radiative transfer as well as account for turbulence in the stellar wind. And Cygnus X-1 is a single test case! Still, this is a huge step forward from point masses, perfect spheres, or even a 2D simulation. Half the challenge in simulating reality is choosing which assumptions are reasonable tradeoffs to construct a useful model, and this paper illustrates just how important X-rays are in determining the behavior of an X-ray binary.

by Meredith Rawls at January 22, 2015 07:31 PM

Emily Lakdawalla - The Planetary Society Blog

Fountains of Water Vapor and Ice
Deepak Dhingra shares some of the latest research on Enceladus' geysers presented at the American Geophysical Union (AGU) Fall Meeting in San Francisco last month.

January 22, 2015 05:22 PM

Symmetrybreaking - Fermilab/SLAC

DECam’s nearby discoveries

The Dark Energy Camera does more than its name would lead you to believe.

The Dark Energy Camera, or DECam, peers deep into space from its mount on the 4-meter Victor Blanco Telescope high in the Chilean Andes.

Thirty percent of the camera’s observing time—about 105 nights per year—goes to the team that built it: scientists working on the Dark Energy Survey.

Another small percentage of the year is spent on maintenance and upgrades to the telescope. So who else gets to use DECam? Dozens of other projects share its remaining time.

Many of them study objects far across the cosmos, but five of them investigate ones closer to home.

Overall, these five groups take up just 20 percent of the available time, but they’ve already taught us some interesting things about our planetary neighborhood and promise to tell us more in the future.

Far-out asteroids

Stony Brook University’s Aren Heinze and the University of Western Ontario’s Stanimir Metchev used DECam for four nights in early 2014 to search for unknown members of our solar system’s main asteroid belt, which sits between Mars and Jupiter.

To detect such faint objects, one needs a long exposure. However, these asteroids are close enough to Earth that their apparent motion blurs any exposure longer than a few minutes. Heinze and Metchev’s fix was to stack more than 100 images, each taken in less than two minutes.
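
The article doesn’t spell out the stacking recipe, but a generic way to attack this kind of problem is “shift-and-stack”: realign each short exposure along an assumed asteroid motion before co-adding, so the faint mover piles up signal while the noise averages down roughly as the square root of the number of frames. Here is a toy sketch of the idea (my illustration, with made-up numbers, not the actual Heinze and Metchev pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, size = 100, 64
    drift = (0.3, 0.1)            # assumed asteroid motion in pixels per frame (x, y)

    frames = []
    for i in range(n_frames):
        img = rng.normal(0.0, 1.0, (size, size))          # sky and read noise
        x = int(round(10 + drift[0] * i))
        y = int(round(10 + drift[1] * i))
        img[y, x] += 0.5                                   # faint moving "asteroid"
        frames.append(img)

    # Shift each frame back along the assumed motion, then average.
    stack = np.zeros((size, size))
    for i, img in enumerate(frames):
        dx = int(round(drift[0] * i))
        dy = int(round(drift[1] * i))
        stack += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    stack /= n_frames

    print("single-frame peak signal-to-noise: about %.1f" % (0.5 / 1.0))
    print("stacked peak signal-to-noise:      about %.1f" % (stack[10, 10] / stack.std()))

An object far too faint to stand out in any single two-minute frame pops out clearly once a hundred of them are aligned and averaged.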

With this method, the team expects to measure the positions, motions and brightnesses of hundreds of main belt asteroids not seen before. They plan to release their survey results in late 2015, and an early partial analysis indicates they’ve already found hundreds of asteroids in a region smaller than DECam’s field of view—about 20 times the area of the full moon.

Whole new worlds

Scott Sheppard of the Carnegie Institution for Science in Washington DC and Chad Trujillo of Gemini Observatory in Hilo, Hawaii, use DECam to look for distant denizens of our solar system. The scientists have imaged the sky for two five-night stretches every year since November 2012.

Every night, the DECam’s sensitive 570-megapixel eye captures images of an area of sky totaling about 200 to 250 times the area of the full moon, returning to each field of view three times. Sheppard and Trujillo run the images from each night through software that tags everything that moves.

“We have to verify everything by eye,” Sheppard says. So they look through about 60 images a night, or 300 total from a perfect five-night observing run, a process that gives them a few dozen objects to study at Carnegie’s Magellan Telescope.

The scientists want to find worlds beyond Pluto and its brethren—a region called the Kuiper Belt, which lies some 30 to 50 astronomical units from the sun (compared to the Earth’s 1). On their first observing run, they caught one.

This new world, with the catalog name of 2012 VP113, comes as close as 80 astronomical units from the sun and journeys as far as 450. Along with Sedna, a minor planet discovered a decade ago, it is one of just two objects found in what was once thought of as a complete no man’s land.

Sheppard and Trujillo also have discovered another dwarf planet that is one of the top 10 brightest objects beyond Neptune, a new comet, and an asteroid that occasionally sprouts an unexpected tail of dust.

Mythical creatures

Northern Arizona University’s David Trilling and colleagues used the DECam for three nights in 2014 to look for “centaurs”—so called because they have characteristics of both asteroids and comets. Astronomers believe centaurs could be lost Kuiper Belt objects that now lie between Jupiter and Neptune.

Trilling’s team expects to find about 50 centaurs in a wide range of sizes. Because centaurs are nearer to the sun than Kuiper Belt objects, they are brighter and thus easier to observe. The scientists hope to learn more about the size distribution of Kuiper Belt objects by studying the sizes of centaurs. The group recently completed its observations and plans to report them later in 2015.

Next-door neighbors

Lori Allen of the National Optical Astronomy Observatory outside Tucson, Arizona, and her colleagues are looking for objects closer than 1.3 astronomical units from the sun. These near-Earth objects have orbits that can cross Earth’s—creating the potential for collision.

Allen’s team specializes in some of the least-studied NEOs: ones smaller than 50 meters across. 

Even small NEOs can be destructive, as demonstrated by the February 2013 NEO that exploded above Chelyabinsk, Russia. The space rock was just 20 meters wide, but the shockwave from its blast shattered windows, which caused injuries to more than 1000 people.

In 2014, Allen’s team used the DECam for 10 nights. They have 20 more nights to use in 2015 and 2016.

They have yet to release specific findings from the survey’s first year, but the researchers say they have a handle on the distribution of NEOs down to just 10 meters wide. They also expect to discover about 100 NEOs the size of the one that exploded above Chelyabinsk.

Space waste

Most surveys looking for “space junk”—inactive satellites, parts of spacecraft and the like in orbit around the Earth—can see only pieces larger than about 20 centimeters. But there’s a lot more material out there.

How much is a question Patrick Seitzer of the University of Michigan and colleagues hope to answer. They used DECam to hunt for debris smaller than 10 centimeters, or the size of a smartphone, in geosynchronous orbit.

The astronomers need to capture at least four images of each piece of debris to determine its position, motion and brightness. This can tell them about the risk from small debris to satellites in geosynchronous orbit. Their results are scheduled for release in mid-2015.
 

 


by Liz Kruesi at January 22, 2015 02:00 PM

January 21, 2015

Lubos Motl - string vacua and pheno

A new paper connecting heterotic strings with an LHC anomaly
Is the LHC going to experimentally support details of string theory in a few months?

Just one week ago, I discussed a paper that has presented a model capable of explaining three approximately 2.5-sigma anomalies seen by the LHC, including the \(\tau\mu\) decay of the Higgs boson \(h\), by using a doubled Higgs sector along with the gauged \(L_\mu-L_\tau\) symmetry.

I have mentioned a speculative addition of mine: those gauge groups could somewhat naturally appear in \(E_8\times E_8\) heterotic string models, my still preferred class of string/M-theory compactifications to describe the Universe around us.

Today, there is a new paper
Explaining the CMS \(eejj\) and \(e /\!\!\!\!{p}_T jj\) Excess and Leptogenesis in Superstring Inspired \(E_6\) Models
by Dhuria and 3 more Indian co-authors that apparently connects an emerging, so far small and inconclusive experimental anomaly at the LHC, with heterotic strings.




The authors consider superstring-inspired models with an \(E_6\) group and supersymmetry whose R-parity is unbroken. And the anomaly they are able to explain is the 2.8-sigma CMS excess that I wrote about in July 2014 and that was attributed to a \(2.1\TeV\) right-handed \(W^\pm_R\)-boson.




The new Indian paper shows that it is rather natural to explain the anomaly in terms of the heterotic models with gauge groups broken to\[

E_8\times E'_8 \to E_6\times SU(3)\times E'_8

\] but they are careful about identifying the precise new particles that create the excess. In fact, it seems that the right-handed gauge bosons are not ideal to play the role. They will lead to problems with baryogenesis. All the baryon asymmetry will disappear because \(B-L\) and \(B+L\) are violated, either at low energies or intensely at the electroweak scale. So this theory would apparently predict that all matter annihilates against the antimatter.

Instead of the right-handed gauge bosons, they promote new exotic sleptons that result from the breaking of \(E_6\) down to a cutely symmetric maximal subgroup\[

E_6\to SU(3)_C \times SU(3)_L \times SU(3)_R

\] under which the fundamental representation decomposes as\[

{\bf 27} = ({\bf 3}, {\bf 3}, {\bf 1}) \oplus
({\bf \bar 3}, {\bf 1}, {\bf \bar 3}) \oplus
({\bf 1}, {\bf \bar 3}, {\bf 3})

\] which should look beautiful to all devout Catholics who love the Holy Trinity. The three \(SU(3)\) factors represent the QCD color, the left-handed extension of the electroweak \(SU(2)_W\), and its right-handed partner.
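
As a quick sanity check on the counting (mine, not the paper's), the dimensions of the three pieces do add up to the dimension of the fundamental representation,\[

\dim ({\bf 3}, {\bf 3}, {\bf 1}) + \dim ({\bf \bar 3}, {\bf 1}, {\bf \bar 3}) + \dim ({\bf 1}, {\bf \bar 3}, {\bf 3}) = 9+9+9 = 27,

\] since a barred triplet is still three-dimensional.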

There are lots of additional technical features that you may want to study in the 8-page-long paper. But I want to emphasize some big-picture, emotional message. And it is the following.

The superpartners have been considered the most likely new particles that may emerge in particle physics experiments. They have the best motivation – the supersymmetric solution to the hierarchy problem (the lightness of the Higgs boson) – to appear at low energies. On the other hand, it's "sensible" to assume that all other new particles, e.g. those linked to grand unification or extra dimensions, are tied to very high energies and therefore unobservable in the near future.

But this expectation isn't rock-solid. In fact, just like the Standard Model fermions are light, there may be additional particles that naturally result from GUT or string theory model building that are light and accessible to the LHC, too. One could expect that "it is likely" that the gauge coupling unification miracle from minimal SUSY GUT ceases to work. But it may work, perhaps with some fixes, and although the fixes are disadvantages, the models may have some advantages that are even more irresistible than the gauge coupling unification.

The possibility that some other, non-SUSY aspects of string models will be found first is here and it is unbelievably attractive, indeed. I would bet that this particular ambitious scenario is "less likely than yes/not" (or whatever is the opposite to "more likely than not" LOL) but the probability isn't zero.

A lighter topic: intestines and thumbs on feet



By Don Lincoln. ;-)

by Luboš Motl (noreply@blogger.com) at January 21, 2015 10:11 PM

arXiv blog

How the Next Generation of Botnets Will Exploit Anonymous Networks, and How to Beat Them

Computer scientists are already devising strategies for neutralizing the next generation of malicious botnets.

 

January 21, 2015 08:29 PM

Quantum Diaries

How to build your own particle detector

This article ran in symmetry on Jan. 20, 2015

Make a cloud chamber and watch fundamental particles zip through your living room! Image: Sandbox Studio, Chicago


The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down to us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

Materials:

  • Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
  • Felt
  • Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
  • Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

Steps:

  1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
  2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
  3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
  4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
  5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.
Artwork by: Sandbox Studio, Chicago

What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated, which means it has been cooled just below the dew point of the alcohol vapor it carries. And just as water molecules cling to blades of grass on cool autumn mornings, the atmospheric alcohol will form cloud-like droplets on anything it can cling to.

Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The resulting tracks look like the contrails of an airplane—long spindly lines marking the particle’s path through your cloud chamber.

What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!

 

 

Sarah Charley

by Fermilab at January 21, 2015 06:01 PM

ZapperZ - Physics and Physicists

GUTs and TOEs
Another informative video, for the general public, from Don Lincoln and Fermilab.



Of course, if you had read my take on the so-called "Theory of Everything", you would know my stand on this when we consider emergent phenomena.

Zz.

by ZapperZ (noreply@blogger.com) at January 21, 2015 05:23 PM

Quantum Diaries

Lepton Number Violation, Doubly Charged Higgs Bosons, and Vector Boson Fusion at the LHC

Doubly charged Higgs bosons and lepton number violation are wickedly cool.

Hi Folks,

The Standard Model (SM) of particle physics is presently the best description of matter and its interactions at small distances and high energies. It is constructed based on observed conservation laws of nature. However, not all conservation laws found in the SM are intentional, for example lepton number conservation. New physics models, such as those that introduce singly and doubly charged Higgs bosons, are flexible enough to reproduce previously observed data but can either conserve or violate these accidental conservation laws. Therefore, some of the best ways of testing if these types of laws are much more fundamental may be with the help of new physics.

Observed Conservation Laws of Nature and the Standard Model

Conservation laws, like the conservation of energy or the conservation of linear momentum, have the most remarkable impact on life and the universe. Conservation of energy, for example, tells us that cars need fuel to operate and perpetual motion machines can never exist. A football sailing across a pitch does not suddenly jerk to the left at 90º, because of conservation of linear momentum, unless acted upon by a player (a force). This is Newton’s First Law of Motion. In particle physics, conservation laws are not taken lightly; they dictate how particles are allowed to behave and forbid some processes from occurring. To see this in action, let’s consider a top quark (t) decaying into a W boson and a bottom quark (b).


A top quark cannot radiate a W+ boson and remain a top quark because of conservation of electric charge. Top quarks have an electric charge of +2/3 e, whereas W+ bosons have an electric charge of +1e, and we know quite well that

(+2/3)e ≠ (+1)e + (+2/3)e.

For reference a proton has an electric charge of +1e and an electron has an electric charge of -1e. However, a top quark can radiate a W+ boson and become a bottom quark, which has electric charge of -1/3e. Since

(+2/3)e = (+1)e + (-1/3)e,

we see that electric charge is conserved.

Conservation of energy, angular momentum, electric charge, etc., are so well-established that the SM is constructed to automatically obey these laws. If we pick any mathematical term in the SM that describes how two or more particles interact (for example how the top quark, bottom quark, and W boson interact with each other) and then add up the electric charge of all the participating particles, we will find that the total electric charge is zero:

The top quark-bottom quark-W boson vertices in the Standard Model, and the net charge carried by each interaction.

The top quark-bottom quark-W boson interaction terms in the Standard Model. Bars above quarks indicate that the quark is an antiparticle and has opposite charges.

 

Accidental Conservation Laws

However, not all conservation laws that appear in the SM are intentional. Conservation of lepton number is an example of this. A lepton is any SM fermion that does not interact with the strong nuclear force. There are six leptons in total: the electron, muon, tau, electron-neutrino, muon-neutrino, and tau-neutrino. We assign lepton number

L=1 to all leptons (electron, muon, tau, and all three neutrinos),

L=-1 to all antileptons (positron, antimuon, antitau, and all three antineutrinos),

L=0 to all other particles.

With these quantum number assignments, we see that lepton number is conserved in the SM. To clarify this important point: we get lepton number conservation for free due to our very rigid requirements when constructing the SM, namely the correct conservation laws (e.g., electric and color charge) and particle content. Since lepton number conservation was not intentional, we say that lepton number is accidentally conserved. Just as we counted the electric charge for the top-bottom-W interaction, we can count the net lepton number for the electron-neutrino-W interaction in the SM and see that lepton number really is zero:


The W boson-neutrino-electron interaction terms in the Standard Model. Bars above leptons indicate that the lepton is an antiparticle and has opposite charges.

However, lepton number conservation is not required to explain data. At no point in constructing the SM did we require that it be conserved. Because of this, many physicists question whether lepton number is actually conserved. It may be, but we do not know. This is indeed one topic that is actively researched. An interesting example of a scenario in which lepton number conservation could be tested is the class of theories with singly and doubly charged Higgs bosons. That is right, there are theories containing additional Higgs bosons that carry an electric charge equal to or double the electric charge of the proton.


Models with scalar SU(2) triplets contain additional neutral Higgs bosons as well as singly and doubly charged Higgs bosons.

Doubly charged Higgs bosons have an electric charge that is twice as large as a proton’s (+2e), which leads to rather peculiar properties. As discussed above, every interaction between two or more particles must respect the SM conservation laws, such as conservation of electric charge. Because of this, a doubly charged Higgs (+2e) cannot decay into a top quark (+2/3 e) and an antibottom quark (+1/3 e),

(+2)e ≠ (+2/3)e + (+1/3)e.

However, a doubly charged Higgs (+2e) can decay into two W bosons (+1e) or two antileptons (+1e) with the same electric charge,

(+2)e = (+1)e + (+1)e.

but that is it. A doubly charged Higgs boson cannot decay into any other pair of SM particles because it would violate electric charge conservation. For these two types of interactions, we can also check whether or not lepton number is conserved:

For the decay into same-sign W boson pairs, the total lepton number is 0L + 0L + 0L = 0L. In this case, lepton number is conserved!

For the decay into same-sign lepton pairs, the total lepton number is 0L + (-1)L + (-1)L = -2L. In this case, lepton number is violated!


Doubly charged Higgs boson interactions for same-sign W boson pairs and same-sign electron pairs. Bars indicate antiparticles. C’s indicate charge flipping.

Therefore if we observe a doubly charged Higgs decaying into a pair of same-sign leptons, then we have evidence that lepton number is violated. If we only observe doubly charged Higgs decaying into same-sign W bosons, then one may speculate that lepton number is conserved in the SM.
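
To make the bookkeeping explicit, here is a tiny toy script (my own illustration, not part of the original post) that assigns each particle its electric charge and lepton number and then checks the decays discussed above. The particle table is a hand-picked subset, just enough for these examples.

    # Each entry: (electric charge in units of e, lepton number L).
    PARTICLES = {
        "H++":  (2.0, 0),        # doubly charged Higgs boson (hypothetical)
        "W+":   (1.0, 0),
        "e+":   (1.0, -1),       # positron: an antilepton, so L = -1
        "t":    (2.0 / 3.0, 0),
        "bbar": (1.0 / 3.0, 0),  # antibottom quark
    }

    def check_decay(parent, daughters):
        q0, l0 = PARTICLES[parent]
        q1 = sum(PARTICLES[d][0] for d in daughters)
        l1 = sum(PARTICLES[d][1] for d in daughters)
        charge_ok = abs(q0 - q1) < 1e-9
        lepton_ok = (l0 == l1)
        print("%s -> %s: charge %s, lepton number %s" % (
            parent, " + ".join(daughters),
            "conserved" if charge_ok else "VIOLATED",
            "conserved" if lepton_ok else "VIOLATED"))

    check_decay("H++", ["W+", "W+"])     # allowed, and lepton number is conserved
    check_decay("H++", ["e+", "e+"])     # allowed by charge, but changes L by two units
    check_decay("H++", ["t", "bbar"])    # forbidden outright: charge not conserved

Running it reproduces the three statements above: the same-sign W channel conserves both quantities, the same-sign lepton channel conserves charge but changes lepton number by two units, and the top plus antibottom channel is forbidden by electric charge conservation.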

Doubly Charged Higgs Factories

Doubly charged Higgs bosons do not interact with quarks (otherwise electric charge conservation would be violated), so we have to rely on vector boson fusion (VBF) to produce them. In VBF, two bosons radiated from oncoming quarks scatter off each other, as seen in the diagram below.


Diagram depicting the process known as WW Scattering, where two quarks from two protons each radiate a W boson that then elastically interact with one another.

If two down quarks, one from each oncoming proton, radiate a W- boson (-1e) and become up quarks, the two W- bosons can fuse into a negatively, doubly charged Higgs (-2e). If lepton number is violated, the Higgs boson can decay into a pair of same-sign electrons (2x -1e). Counting lepton number at the beginning of the process (L = 0 – 0 = 0) and at the end (L = 0 – 2 = -2!), we see that it changes by two units!


Same-sign W- pairs fusing into a doubly charged Higgs boson that decays into same-sign electrons.

If lepton number is not violated, we will never see this decay and will only see decays to two very, very energetic W- bosons (-1e each). Searches for vector boson fusion as well as for lepton number violation are important components of the overarching Large Hadron Collider (LHC) research program at CERN. Unfortunately, there is no evidence for the existence of doubly charged scalars. On the other hand, we do have evidence for vector boson scattering (VBS) of same-sign W bosons! Additional plots can be found on ATLAS’ website. Reaching this tremendous milestone is a triumph for the LHC experiments. Vector boson fusion is a very, very, very, very, very rare process in the Standard Model and difficult to separate from other SM processes. Finding evidence for it is a first step in using the VBF process as a probe of new physics.


Same-sign W boson scattering candidate event at the LHC ATLAS experiment. Slide credit: Junjie Zhu (Michigan)

We have observed that some quantities, like momentum and electric charge, are conserved in nature. Conservation laws are few and far between, but are powerful. The modern framework of particle physics has these laws built into it, but has also been found to accidentally conserve other quantities, like lepton number. However, as lepton number is not required to reproduce data, it may be the case that these accidental laws are not, in fact, conserved. Theories that introduce charged Higgs bosons can reproduce data but also predict new interactions, such as doubly charged Higgs bosons decaying to same-sign W boson pairs and, if lepton number is violated, to same-sign charged lepton pairs. These new, exotic particles can be produced through vector boson fusion of two same-sign W bosons. VBF is a rare process in the SM, and its rate can greatly increase if new particles exist. At last, there is evidence for vector boson scattering of same-sign W bosons, and it may be the next step toward discovering new particles and new laws of nature!

Happy Colliding

- Richard (@BraveLittleMuon)

by Richard Ruiz at January 21, 2015 04:16 PM

Clifford V. Johnson - Asymptotia

Flowers of the Sky
Augsburger_Wunderzeichenbuch,_Folio_52 Here is a page of a lovely set of (public domain) images of comets and meteors, as depicted in various ways through the centuries. The above sample is from the famous [...] Click to continue reading this post

by Clifford at January 21, 2015 04:15 PM

Tommaso Dorigo - Scientificblogging

One Year In Pictures
A periodic backup of my mobile phone yesterday - mainly pictures and videos - was the occasion to give a look back at things I did and places I visited in 2014, for business and leisure. I thought it would be fun to share some of those pictures with you, with sparse comments. I know, Facebook does this for you automatically, but what does Facebook know of what is meaningful and what isn't ? So here we go.
The first pic was taken at Beaubourg, in Paris - it is a sculpture I absolutely love: "The king plays with the queen" by Max Ernst.



Still in Paris (for a vacation at the beginning of January), the grandiose interior of the Opera de Paris...

read more

by Tommaso Dorigo at January 21, 2015 02:12 PM

January 20, 2015

Jester - Resonaances

Planck: what's new
Slides from the recent Planck collaboration meeting are now available online. One can find there preliminary results that include input from Planck's measurements of the polarization of the Cosmic Microwave Background (some of which were previously available via the legendary press release in French). I already wrote about the new important limits on the dark matter annihilation cross section. Here I pick out a few more things that may be of interest to a garden-variety particle physicist.

  • ΛCDM. 
    Here is a summary of Planck's best fit parameters of the standard cosmological model with and without the polarization info:

    Note that the temperature-only numbers are slightly different from those in the 2013 release, because of improved calibration and foreground cleaning. Frustratingly, ΛCDM remains solid. The polarization data do not change the overall picture, but they shrink some errors considerably. The Hubble parameter remains at a low value; the previous tension with type Ia supernovae observations seems to be partly resolved and blamed on systematics on the supernovae side. For the large-scale-structure fans, the parameter σ8 characterizing matter fluctuations today remains at a high value, in some tension with weak lensing and cluster counts. 
  • Neff.
    There are also better limits on deviations from ΛCDM. One interesting result is the new, improved constraint on the effective number of neutrinos, Neff for short. The way this result is presented may be confusing. We know perfectly well that there are exactly 3 light active (interacting via the weak force) neutrinos; this was established in the '90s at the LEP collider, and Planck has little to add in this respect. Heavy neutrinos, whether active or sterile, would not show up in this measurement at all. For light sterile neutrinos, Neff implies an upper bound on the mixing angle with the active ones. The real importance of Neff lies in the fact that it counts any light particles (other than photons) contributing to the energy density of the universe at the time of CMB decoupling. Beyond the standard model neutrinos, other theorized particles could contribute any real positive number to Neff, depending on their temperature and spin. A few years ago there were consistent hints of Neff much larger than 3, which would imply physics beyond the standard model. Alas, Planck has shot down these claims. The latest number combining Planck and Baryon Acoustic Oscillations is Neff = 3.04±0.18, spot on the 3.046 expected from the standard model neutrinos. This represents an important constraint on any new physics model with very light (less than an eV) particles. 
  • Σmν.
    The limit on the sum of the neutrino masses keeps improving and is getting into a really interesting regime. Recall that, from oscillation experiments, we can extract the neutrino mass differences: Δm32 ≈ 0.05 eV (up to a sign) and Δm12 ≈ 0.009 eV, but we don't know the absolute masses. Planck and others have already excluded the possibility that all 3 neutrinos have approximately the same mass. Now they are not far from probing the so-called inverted hierarchy, where two neutrinos have approximately the same mass and the 3rd is much lighter, in which case Σmν ≈ 0.1 eV (a rough numerical version of these hierarchy benchmarks is sketched just after this list). Planck and Baryon Acoustic Oscillations set the limit Σmν < 0.16 eV at 95% CL; however, this result is not strongly advertised because it is sensitive to the value of the Hubble parameter. Including non-Planck measurements leads to a weaker, more conservative limit Σmν < 0.23 eV, the same as quoted in the 2013 release. 
  • CνB.
    For dessert, something cool. So far we could observe the cosmic neutrino background only through its contribution to the energy density of radiation in the early universe. This affects observables that can be inferred from the CMB acoustic peaks, such as the Hubble expansion rate or the time of matter-radiation equality. Planck, for the first time, probes the properties of the CνB. Namely, it measures the effective sound speed ceff and viscosity cvis parameters, which affect the growth of perturbations in the CνB. Free-streaming particles like the neutrinos should have ceff^2 = cvis^2 = 1/3, while Planck measures ceff^2 = 0.3256±0.0063 and cvis^2 = 0.336±0.039. The result is unsurprising, but it may help constrain some more exotic models of neutrino interactions. 
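
Here is the rough arithmetic behind those two hierarchy benchmarks, as a minimal sketch (my own back-of-the-envelope illustration, using the rounded splittings quoted above and treating them as the square roots of the measured mass-squared differences):

import math

# Minimal sum of neutrino masses in the two hierarchies, assuming the
# lightest state is massless.  dm32 and dm12 are the square roots of the
# mass-squared splittings, in eV, rounded as in the text above.
dm32 = 0.05
dm12 = 0.009

# normal hierarchy: m1 = 0, m2 = dm12, m3 = sqrt(dm32^2 + dm12^2)
sum_normal = 0.0 + dm12 + math.sqrt(dm32**2 + dm12**2)
# inverted hierarchy: m3 = 0, m2 = dm32, m1 = sqrt(dm32^2 - dm12^2)
sum_inverted = math.sqrt(dm32**2 - dm12**2) + dm32 + 0.0

print("minimal sum, normal hierarchy:   %.3f eV" % sum_normal)    # ~0.06 eV
print("minimal sum, inverted hierarchy: %.3f eV" % sum_inverted)  # ~0.10 eV
# compare with the limits quoted above: 0.16 eV (Planck+BAO) and 0.23 eV (conservative)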


To summarize, Planck continues to deliver disappointing results, and there's still more to follow ;)

by Jester (noreply@blogger.com) at January 20, 2015 11:48 PM

The n-Category Cafe

The Univalent Perspective on Classifying Spaces

I feel like I should apologize for not being more active at the Cafe recently. I’ve been busy, of course, and also most of my recent blog posts have been going to the HoTT blog, since I felt most of them were of interest only to the HoTT crowd (by which I mean, “people interested enough in HoTT to follow the HoTT blog” — which may of course include many Cafe readers as well). But today’s post, while also inspired by HoTT, is less technical and (I hope) of interest even to “classical” higher category theorists.

In general, a classifying space for bundles of X’s is a space B such that maps Y → B are equivalent to bundles of X’s over Y. In classical algebraic topology, such spaces are generally constructed as the geometric realization of the nerve of a category of X’s, and as such they may be hard to visualize geometrically. However, it’s generally useful to think of B as a space whose points are X’s, so that the classifying map Y → B of a bundle of X’s assigns to each y ∈ Y the corresponding fiber (which is an X). For instance, the classifying space BO of vector bundles can be thought of as a space whose points are vector spaces, where the classifying map of a vector bundle assigns to each point the fiber over that point (which is a vector space).

In classical algebraic topology, this point of view can’t be taken quite literally, although we can make some use of it by identifying a classifying space with its representable functor. For instance, if we want to define a map f: BO → BO, we’d like to say “a point v ∈ BO is a vector space, so let’s do blah to it and get another vector space f(v) ∈ BO”. We can’t do that, but we can do the next best thing: if blah is something that can be done fiberwise to a vector bundle in a natural way, then since Hom(Y, BO) is naturally equivalent to the collection of vector bundles over Y, our blah defines a natural transformation Hom(−, BO) → Hom(−, BO), and hence a map f: BO → BO by the Yoneda lemma.

However, in higher category theory and homotopy type theory, we can really take this perspective literally. That is, if by “space” we choose to mean “∞-groupoid” rather than “topological space up to homotopy”, then we can really define the classifying space to be the ∞-groupoid of X’s, whose points (objects) are X’s, whose morphisms are equivalences between X’s, and so on. Now, in defining a map such as our f, we can actually just give a map from X’s to X’s, as long as we check that it’s functorial on equivalences — and if we’re working in HoTT, we don’t even have to do the second part, since everything we can write down in HoTT is automatically functorial/natural.

This gives a different perspective on some classifying-space constructions that can be more illuminating than a classical one. Below the fold I’ll discuss some examples that have come to my attention recently.

All of these examples have to do with the classifying space of “types equivalent to X” for some fixed X. Such a classifying space, often denoted BAut(X), has the property that maps Y → BAut(X) are equivalent to maps (perhaps “fibrations” or “bundles”) Z → Y all of whose fibers are equivalent (a homotopy type theorist might say “merely equivalent”) to X. The notation BAut(X) accords with the classical notation BG for the delooping of a (perhaps ∞-) group: in fact this is a delooping of the group of automorphisms of X.

Categorically (and homotopy-type-theoretically), we simply define BAut(X) to be the full sub-∞-groupoid of ∞Gpd (the ∞-groupoid of ∞-groupoids) whose objects are those equivalent to X. You might have thought I was going to say the full sub-∞-groupoid on the single object X, and that would indeed give us an equivalent result, but the examples I’m about to discuss really do rely on having all the other equivalent objects in there. In particular, note that an arbitrary object of BAut(X) is an ∞-groupoid that admits some equivalence to X, but no such equivalence has been specified.

Example 1: BAut(2)

As the first example, let X = 2 = {0,1}, the standard discrete space with two points. Then Aut(2) = C_2, the cyclic group on 2 elements, and so BAut(2) = BC_2 = K(C_2, 1). Since C_2 is an abelian group, BC_2 again has a (2-)group structure, i.e. we should have a multiplication operation BC_2 × BC_2 → BC_2, an identity, inversion, etc.

Using the equivalence BC_2 ≃ BAut(2), we can describe all of these operations directly. A point Z ∈ BAut(2) is a space that’s equivalent to 2, but without a specified equivalence. Thus, Z is a set with two elements, but we haven’t chosen either of those elements to call “0” or “1”. As long as we perform constructions on Z without making such an unnatural choice, we’ll get maps that act on BAut(2), and hence on BC_2 as well.

The identity element of BAut(2) is fairly obvious: there’s only one canonical element of BAut(2), namely 2 itself. The multiplication is not as obvious, and there may be more than one way to do it, but after messing around with it a bit you may come to the same conclusion I did: the product of Z, W ∈ BAut(2) should be Iso(Z,W), the set of isomorphisms between Z and W. Note that when Z and W are 2-element sets, so is Iso(Z,W), but in general there’s no way to distinguish either of those isomorphisms from the other one, nor is Iso(Z,W) naturally isomorphic to Z or W. It is, however, obviously commutative: Iso(Z,W) ≅ Iso(W,Z).

Moreover, if Z = 2 is the identity element, then Iso(2,W) is naturally isomorphic to W: we can define Iso(2,W) → W by evaluating at 0 ∈ 2. Similarly, Iso(Z,2) ≅ Z, so our “identity element” has the desired property.

Furthermore, if Z = W, then Iso(Z,Z) does have a distinguished element, namely the identity. Thus, it is naturally equivalent to 2 by sending the identity to 0 ∈ 2. So every element of BAut(2) is its own inverse. The trickiest part is proving that this operation is associative. I’ll leave that to the reader (or you can try to decipher my Coq code).

(We did have to make some choices about whether to use 0 ∈ 2 or 1 ∈ 2. I expect that as long as we make those choices consistently, making them differently will result in equivalent 2-groups.)
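
Since the operation becomes completely concrete once we restrict to honest finite sets, here is a small Python sketch of the product Z·W = Iso(Z,W) and the identity laws on 2-element sets (purely my own illustration, and no substitute for the Coq formalization mentioned above; the names iso, TWO, etc. are mine):

from itertools import permutations

# The product of two 2-element sets Z and W is Iso(Z, W), the set of
# bijections Z -> W, each represented here as a frozenset of (z, w) pairs.
def iso(Z, W):
    Z = sorted(Z)
    return {frozenset(zip(Z, p)) for p in permutations(sorted(W))}

TWO = frozenset({0, 1})          # the identity element "2" of the 2-group

Z = frozenset({"a", "b"})
W = frozenset({"x", "y"})

assert len(iso(Z, W)) == 2       # Iso(Z, W) is again a 2-element set

# Iso(2, W) is naturally isomorphic to W: evaluate a bijection at 0 in 2.
images = {dict(f)[0] for f in iso(TWO, W)}
assert images == set(W)          # evaluation at 0 is a bijection onto W

# Iso(Z, Z) is pointed by the identity, which exhibits Z as its own inverse.
identity = frozenset((z, z) for z in Z)
assert identity in iso(Z, Z)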

Example 2: An incoherent idempotent

In 1-category theory, an idempotent is a map f: A → A such that f ∘ f = f. In higher category theory, the equality f ∘ f = f must be weakened to an isomorphism or equivalence, and then treated as extra data on which we ought to ask for additional axioms, such as that the two induced equivalences f ∘ f ∘ f ≃ f coincide (up to an equivalence, of course, which then satisfies its own higher laws, etc.).

A natural question is whether, if we have only an equivalence f ∘ f ≃ f, it can be “improved” to a “fully coherent” idempotent in this sense. Jacob Lurie gave the following counterexample in Warning 1.2.4.8 of Higher Algebra:

let G denote the group of homeomorphisms of the unit interval [0,1] which fix the endpoints (which we regard as a discrete group), and let λ: G → G denote the group homomorphism given by the formula

\lambda(g)(t) = \begin{cases} \frac{1}{2} g(2t) & \text{if } 0\le t \le \frac{1}{2}\\ t & \text{if } \frac{1}{2}\le t \le 1. \end{cases}

Choose an element h ∈ G such that h(t) = 2t for 0 ≤ t ≤ 1/4. Then λ(g) ∘ h = h ∘ λ(λ(g)) for each g ∈ G, so that the group homomorphisms λ, λ^2: G → G are conjugate to one another. It follows that the induced map of classifying spaces e: BG → BG is homotopic to e^2, and therefore idempotent in the homotopy category of spaces. However… e cannot be lifted to a [coherent] idempotent in the ∞-category of spaces.

Let’s describe this map e in the more direct way I suggested above. Actually, let’s do something easier and just as good: let’s replace [0,1] by Cantor space 2^ℕ. It’s reasonable to guess that this should work, since the essential property of [0,1] being used in the above construction is that it can be decomposed into two pieces (namely [0,1/2] and [1/2,1]) which are both equivalent to itself, and 2^ℕ has this property as well:

2^{\mathbb{N}} \cong 2^{\mathbb{N}+1} \cong 2^{\mathbb{N}} \times 2^1 \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}.

Moreover, 2^ℕ has the advantage that this decomposition is disjoint, i.e. a coproduct. Thus, we can also get rid of the assumption that our automorphisms preserve endpoints, which was just there in order to allow us to glue two different automorphisms on the two copies in the decomposition.
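
To make the decomposition 2^ℕ ≅ 2^ℕ + 2^ℕ concrete, here is a small sketch (my own illustration, not from the post) in which a point of Cantor space is modelled as a function ℕ → {0,1}: the first bit serves as the coproduct tag, and the tail of the sequence is the point of that summand.

# Illustration only: a point of Cantor space 2^N is modelled as a function
# s : N -> {0, 1}.  The first bit is the coproduct tag, the tail the point.
def split(s):
    """2^N -> 2^N + 2^N: return (tag, tail)."""
    return s(0), (lambda n: s(n + 1))

def merge(tag, s):
    """2^N + 2^N -> 2^N: prepend the tag bit (inverse of split)."""
    return lambda n: tag if n == 0 else s(n - 1)

# quick sanity check on an example sequence
alternating = lambda n: n % 2          # 0, 1, 0, 1, ...
tag, tail = split(alternating)
recombined = merge(tag, tail)
assert all(recombined(n) == alternating(n) for n in range(20))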

Therefore, our goal is now to construct an endomap of BAut(2^ℕ) which is incoherently, but not coherently, idempotent. As discussed above, the elements of BAut(2^ℕ) are spaces that are equivalent to 2^ℕ, but without any such specified equivalence. Looking at the definition of Lurie’s λ, we can see that intuitively, what it does is shrink the interval to half of itself, acting functorially, and add a new copy of the interval at the end. Thus, it’s reasonable to define e: BAut(2^ℕ) → BAut(2^ℕ) by

e(Z) = Z + 2^{\mathbb{N}}.

Here Z is some space equivalent to 2^ℕ, and in order for this map to be well-defined, we need to show that if Z is equivalent to 2^ℕ, then so is Z + 2^ℕ. However, the decomposition 2^ℕ ≅ 2^ℕ + 2^ℕ ensures this. Moreover, since our definition didn’t involve making any unnatural choices, it’s “obviously” (and in HoTT, automatically) functorial.

Now, is e incoherently idempotent, i.e. do we have e(e(Z)) ≅ e(Z)? Well, that is just asking whether

(Z + 2^{\mathbb{N}}) + 2^{\mathbb{N}} \quad\text{is equivalent to}\quad Z + 2^{\mathbb{N}}

but this again follows from 2^ℕ ≅ 2^ℕ + 2^ℕ! Showing that e is not coherent is a bit harder, but still fairly straightforward using our description; I’ll leave it as an exercise, or you can try to decipher the Coq code.

Example 3: Natural pointed sets

Let’s end by considering the following question: in what cases does the natural map BS_{n-1} → BS_n have a retraction, where S_n is the symmetric group on n elements? Looking at homotopy groups, this would imply that the inclusion S_{n-1} ↪ S_n has a retraction, which is true for n < 5 but not otherwise. But let’s look instead at the map on classifying spaces.

The obvious way to think about this map is to identify BS_n with BAut(n), where n here denotes the discrete set with n elements, and similarly BS_{n-1} with BAut(n-1). Then an element of BAut(n-1) is a set Z with n-1 elements, and the map BS_{n-1} → BS_n takes it to Z+1, which has n elements.

However, another possibility is to identify BS_{n-1} instead with the classifying space of pointed sets with n elements. Since an isomorphism of pointed sets must respect the basepoint, this gives an equivalent groupoid, and now the map BS_{n-1} → BS_n is just forgetting the basepoint. With this identification, a putative retraction BS_n → BS_{n-1} would assign, to any set Z with n elements, a pointed set (r(Z), r_0) with n elements. Note that the underlying set r(Z) need not be Z itself; they will of course be isomorphic (since both have n elements), but there is no specified or natural isomorphism. However, to say that r is a retraction of our given map says that if Z started out pointed, then (r(Z), r_0) is isomorphic to (Z, z_0) as pointed sets.

Let’s do some small examples. When n = 1, our map r has to take a set with 1 element and assign to it a pointed set with 1 element. There’s obviously a unique way to do that, and just as obviously, if we started out with a pointed set we get the same set back again.

The case n = 2 is a bit more interesting: our map r has to take a set Z with 2 elements and assign to it a pointed set with 2 elements. One option, of course, is to define r(Z) = 2 for all Z. Since every pointed 2-element set is uniquely isomorphic to every other, this satisfies the requirement. Another option motivated by Example 1, which is perhaps a little more satisfying, would be to define r(Z) = Iso(Z,Z), which is pointed by the identity.

The case n = 3 is more interesting still, since now it is not true that any two pointed 3-element sets are naturally isomorphic. Given a 3-element set Z, how do we assign to it functorially a pointed 3-element set? The best way I’ve thought of is to let r(Z) be the set of automorphisms f ∈ Iso(Z,Z) such that f^3 = id. This has 3 elements, the identity and the two 3-cycles, and we can take the identity as a basepoint. And if Z came with a point z_0, then we can define an isomorphism Z ≅ r(Z) by sending z ∈ Z to the unique f ∈ r(Z) having the property that f(z_0) = z.

The case n = 4 is somewhat similar: given a 4-element set Z, define r(Z) to be the set of automorphisms f ∈ Iso(Z,Z) such that f^2 = id and whose set of fixed points is either empty or all of Z. This has 4 elements and is pointed by the identity; in fact, it is the permutation representation of the Klein four-group. And once again, if Z came with a point z_0, we can define Z ≅ r(Z) by sending z ∈ Z to the unique f ∈ r(Z) such that f(z_0) = z.
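
Since the n = 3 recipe is completely finite, here is a quick Python check (my own, not the author’s Coq) that r(Z) really has three elements and that evaluation at a basepoint gives the claimed isomorphism of pointed sets:

from itertools import permutations

def compose(f, g):
    """Composite f o g of two permutations represented as dicts."""
    return {x: f[g[x]] for x in g}

def r(Z):
    """Automorphisms f of Z with f o f o f = id."""
    Z = sorted(Z)
    identity = {x: x for x in Z}
    perms = [dict(zip(Z, p)) for p in permutations(Z)]
    return [f for f in perms if compose(f, compose(f, f)) == identity]

Z = {"a", "b", "c"}
auts = r(Z)
assert len(auts) == 3                     # the identity and the two 3-cycles

z0 = "a"                                  # a chosen basepoint
for z in Z:                               # each z is hit by exactly one f with f(z0) = z
    assert sum(1 for f in auts if f[z0] == z) == 1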

I will end with a question that I don’t know the answer to: is there any way to see, from this perspective on classifying spaces, that such a retraction doesn’t exist in the case n = 5?

by shulman (viritrilbia@gmail.com) at January 20, 2015 11:23 PM

Symmetrybreaking - Fermilab/SLAC

How to build your own particle detector

Make a cloud chamber and watch fundamental particles zip through your living room!

The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down to us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

Materials:

  • Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
  • Felt
  • Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
  • Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

Steps:

  1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
  2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
  3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
  4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
  5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.
Artwork by: Sandbox Studio, Chicago

What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated: it has cooled to just below the dew point of the alcohol vapor, so it holds more vapor than it can at equilibrium. And just as water molecules cling to blades of grass on cool autumn mornings, the alcohol vapor will form cloud-like droplets on anything it can cling to.

Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The resulting tracks look like the contrails of an airplane: long, spindly lines marking the particle’s path through your cloud chamber.

What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!

Like what you see? Sign up for a free subscription to symmetry!

    by Sarah Charley at January 20, 2015 07:44 PM

    ZapperZ - Physics and Physicists

    Macrorealism Violated By Cs Atoms
    It is another example where the more they test QM, the more convincing it becomes.

    This latest experiment tests whether superposition truly exists, via a very stringent test that applies the Leggett-Garg criteria.

    In comparison with these earlier experiments, the atoms studied in the experiments by Robens et al. are the largest quantum objects with which the Leggett-Garg inequality has been tested using what is called a null measurement—a “noninvasive” measurement that allows the inequality to be confirmed in the most convincing way possible. In the researchers’ experiment, a cesium atom moves in one of two standing optical waves that have opposite electric-field polarizations, and the atom’s position is measured at various times. The two standing waves can be pictured as a tiny pair of overlapping one-dimensional egg-carton strips—one red, one blue (Fig. 1). The experiment consists of measuring correlations between the atom’s position at different times. Robens et al. first put the atom into a superposition of two internal hyperfine spin states; this corresponds to being in both cartons simultaneously. Next, the team slid the two optical waves past each other, which causes the atom to smear out over a distance of up to about 2 micrometers in a motion known as a quantum walk. Finally, the authors optically excited the atom, causing it to fluoresce and reveal its location at a single site. Knowing where the atom began allows them to calculate, on average, whether the atom moved left or right from its starting position. By repeating this experiment, they can obtain correlations between the atom’s position at different times, which are the inputs into the Leggett-Garg inequality.
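
For readers who have not seen the Leggett-Garg inequality written out, here is a minimal numerical sketch of its simplest form, K = C12 + C23 - C13, for a two-level system precessing between measurements. This is the standard textbook illustration, not the Cs-atom quantum-walk protocol of Robens et al.: macrorealism bounds K <= 1, while quantum mechanics pushes K up to 3/2.

import numpy as np

# Two-level system with dichotomic observable Q = sigma_z, measured at three
# equally spaced times while precessing about x.  For a maximally mixed
# initial state the two-time correlator is C(t_i, t_j) = cos(omega*(t_j - t_i)).
phi = np.linspace(0, np.pi, 1000)          # phi = omega * tau, the phase per time step
C12 = np.cos(phi)
C23 = np.cos(phi)
C13 = np.cos(2 * phi)
K = C12 + C23 - C13                        # the Leggett-Garg combination

print("maximum K:", K.max())               # ~1.5, reached near phi = pi/3
print("macrorealist bound K <= 1 exceeded:", K.max() > 1)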

    You may read the result they got in the report. Also note that you also get free access to the actual paper.

    But don't miss the importance of this work, as stated in this review.


    Almost a century after the quantum revolution in science, it’s perhaps surprising that physicists are still trying to prove the existence of superpositions. The real motivation lies in the future of theoretical physics. Fledgling theories of macrorealism may well form the basis of the next generation “upgrade” to quantum theory by setting the scale of the quantum-classical boundary. Thanks to the results of this experiment, we can be sure that the boundary cannot lie below the scale at which the cesium atom has been shown to behave like a wave. How high is this scale? A theoretical measure of macroscopicity [8] (see 18 April 2013 Synopsis) gives the cesium atom a modest ranking of 6.8, above the only other object tested with null measurements [5], but far below where most suspect the boundary lies. (Schrödinger’s cat is a 57.) In fact, matter-wave interferometry experiments have already shown interference fringes with Buckminsterfullerene molecules [9], boasting a rating as high as 12. In my opinion, however, we can be surer of the demonstration of the quantumness of the cesium atom because of the authors’ exclusion of macrorealism via null result measurements. The next step is to try these experiments with atoms of larger mass, superposed over longer time scales and separated by greater distances. This will push the envelope of macroscopicity further and reveal yet more about the nature of the relationship between the quantum and the macroworld.


    Zz.

    by ZapperZ (noreply@blogger.com) at January 20, 2015 06:31 PM

    astrobites - astro-ph reader's digest

    Are There Age Spreads in Intermediate Age Clusters?
    Title: The Morphology of the Sub-Giant Branch and the Red Clump Reveal No Sign of Age Spreads in Intermediate Age Clusters

    Authors: N. Bastian and F. Niederhofer

    First Author’s Institution: Astrophysics Research Institute, Liverpool John Moores University

    Status: Accepted by MNRAS

    Background

    Can star clusters be host to multiple events of star formation? Data from the Hubble Space Telescope seem to suggest that this is the case. While we often assume that all the stars in a stellar cluster have the same age, astronomers have recently found that a majority of intermediate age clusters in the Large and Small Magellanic Clouds display evidence of a spread in the age of their constituent stars. To see where this comes from, let’s take a step back to look at some properties of stars.

    Figure 1 from the paper: a color-magnitude diagram of NGC 1806, showing the sub-giant stars (in red) clustering around the isochrone consistent with 1.41 Gyr. The narrowness of the sub-giant branch suggests that the stars do not have an age spread.

    Stars aren’t born with just any luminosity or color. If you were to plot stellar color against luminosity (what is known as a Hertzsprung-Russell Diagram or a color-magnitude diagram), you’d find a band where the newly-formed stars tend to lie, known as the main sequence. As the stars progress through their lives, they eventually evolve away from the main sequence, with the most massive stars spending the least amount of time on the main sequence and the least massive stars spending the most time on the main sequence. The point where the star leaves the main sequence is known as the main sequence turnoff. While it’s very difficult to measure the age of any single star, we can often use the main sequence turnoff to get an estimate of a stellar cluster‘s age.

    How does this work? In general, stars are also not formed in isolation, something that’s been mentioned previously on astrobites. They are usually formed in groups, from massive clouds of molecular gas often hundreds of solar masses in size. As a result, we tend to assume that all the stars in a stellar cluster are roughly the same age (what’s known as a simple stellar population or SSP) and distance from us. We can then estimate the age of the stellar cluster by looking at the age of the stars at the main sequence turnoff; if all the stars in a cluster are about the same age, then the stars leaving the main sequence should also be about the same age, producing a narrow main sequence turnoff. This age would then be the age of the stellar cluster.

    On the other hand, if a stellar cluster contains stars formed from several different star formation events, there would be a spread in the age of stars at the main sequence turnoff, known as an extended main sequence turnoff, or eMSTO. The presence of eMSTOs in many intermediate age clusters has caused astronomers to suggest that these clusters have been host to many star formation events. This problem was previously discussed in this astrobite, which focused on the star cluster NGC 1651.

    However, when the authors of the previous astrobite paper investigated NGC 1651, they found that other features indicative of multiple generations of star formation–like a wide sub-giant branch–were not present. This led them to conclude that the extended main sequence turnoff might not be an indication of a spread in age after all. But is this true for other intermediate-age clusters?

    NGC 1806 & NGC 1846

    The authors of today’s paper have looked at the intermediate-age stellar clusters NGC 1806 and NGC 1846, both located in the Large Magellanic Cloud, to see if they can find an age dispersion in other parts of the color-magnitude diagram. In particular, they focus on the width of the sub-giant branch and the red clump. These clusters are estimated to have age spreads of about 200-600 Myr largely based on their MSTO regions.

    They began by making color-magnitude diagrams of the stars in both clusters. The one for NGC 1806 is shown in Figure 1. The blue lines running through the diagram are isochrones, curves that represent the location of stars with the same age. The sub-giant branch stars, in red, are clustered around one isochrone, which seem to indicate that there isn’t a spread in their ages.

    Figure 2: Figure 3 from the paper. The black histogram indicates the magnitude difference between the observed sub-giant stars and the synthetic sub-giant stars in their 1.44 Gyr isochrone model. The red distribution shows the expected distribution for their simple stellar population model convolved with the observational errors. As we can see, this doesn't explain the tail ends of the black histogram. On the other hand, the blue distribution, which is the synthetic stellar population model with an age spread that best fits the MSTO region, misses the core of the black histogram.

    The authors then created two synthetic color-magnitude diagrams, one showing the distribution expected if the eMSTO were caused by an age dispersion, and the other for a simple stellar population with an age of 1.44 Gyr (their best fit to the sub-giant branch). To compare their observations with the simulations, they subtracted the magnitudes predicted by the 1.44 Gyr isochrone from the magnitudes of the observed sub-giant branch stars. This is shown as the black histogram in Figure 2. After estimating the observational errors, they convolve these with the distribution of stars expected from a simple stellar population (in red). Finally, these are compared with the distribution expected from stars that formed over an extended star formation history (in blue). The 1.44 Gyr isochrone is able to reproduce the peak of the histogram but fails to catch the tails, which the authors acknowledge are therefore unlikely to be caused by photometric errors (errors in the measured flux). On the other hand, the model that uses a stellar population with an age spread fails to account for the peak of the histogram, suggesting an inconsistency between the sub-giant branch and a large age spread. When they analyze the red clump, they also find the stars clustered around a single isochrone.
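
To see the logic of that comparison in miniature, here is a toy numerical sketch (mine, not the authors' pipeline; in particular the conversion from age offset to sub-giant-branch magnitude offset is a made-up linear coefficient used only for illustration): a single-age population convolved with photometric errors stays narrow, while a population with a few-hundred-Myr age spread produces a visibly broader magnitude distribution.

import numpy as np

rng = np.random.default_rng(0)
n_stars = 2000
phot_err = 0.02          # assumed photometric scatter, in magnitudes
dmag_per_gyr = 0.5       # toy coefficient: SGB magnitude shift per Gyr of age offset

# single-age ("SSP") population at 1.44 Gyr: no intrinsic spread, only errors
ssp = rng.normal(0.0, phot_err, n_stars)

# population with a 400 Myr age spread around 1.44 Gyr, plus the same errors
age_offsets = rng.uniform(-0.2, 0.2, n_stars)                  # in Gyr
spread = age_offsets * dmag_per_gyr + rng.normal(0.0, phot_err, n_stars)

print("width (std) of SSP + errors:     %.3f mag" % ssp.std())
print("width (std) of age-spread model: %.3f mag" % spread.std())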

    The authors repeat their analysis for NGC 1846 and obtain similar results for that cluster, leading them to conclude that eMSTOs are not caused by an age spread.

    Maybe Not

    Their results are consistent with the assumption that stellar clusters do not exhibit a range of stellar ages and are also consistent with a number of findings in the literature that support a lack of age spreads in stellar clusters. Instead, the authors point towards stellar rotation or interacting binaries as two possible causes of the eMSTOs. Stellar rotation can change the structure of the star and its inclination angle to an observer, causing it to have a different effective temperature and, therefore, color. However, we still don’t fully understand the effects of rotation on the sub-giant branch of the color-magnitude diagram, so it is still possible that this explanation, while consistent with the eMSTO region, could be in conflict with other parts of the color-magnitude diagram. Another possible explanation they offer is the presence of unresolved binaries; if we have unresolved binary stars, then we would be recording the flux of two stars rather than just one. These binaries are not expected to cause significant broadening in either the sub-giant branch or the red clump, but they also may not fully explain the eMSTO. The authors also acknowledge and encourage alternative explanations for the eMSTOs as well.

    Despite the growing evidence for an alternative explanation to age spreads being the cause of the eMSTOs we observe, it’s probably too soon for us to conclude definitively either way in this debate. At the very least, these results indicate that we still need to study eMSTOs and their possible causes in greater detail.

    by Caroline Huang at January 20, 2015 12:44 PM

    Lubos Motl - string vacua and pheno

    Prof Collins explains string theory
    Prof Emeritus Walter Lewin has been an excellent physics instructor who loved to include truly physical demonstrations of certain principles, laws, and concepts.



    After you understand string theory, don't forget about inertia, either. ;-)

    When the SJWs fired him and tried to erase him from the history of the Universe, a vacuum was created at MIT.




    The sensible people at MIT were thinking about a way to fill this vacuum. After many meetings, the committee decided to hire a new string theory professor who is especially good at teaching, someone like Barton Zwiebach #2 but someone who can achieve an even more intimate contact with the students.




    In the end, it became clear that they had to hire Prof Collins, and her mandatory physics class on string theory is shown above. It is not too demanding, even though e.g. the readers of texts by Mr Smolin or Mr Woit – or these not so Gentlemen themselves – may still find the material too technical.

    But the rest will surely enjoy it. ;-)



    Someone could think that this affiliation with MIT is just a joke, but I assure you that Dr Paige Hopewell from the Bikini Calculus lecture above has been an excellent nuclear physicist affiliated with MIT. While at Purdue, she won an award in 2007, and so on.

    See also: hot women banned in optics

    by Luboš Motl (noreply@blogger.com) at January 20, 2015 10:35 AM

    Clifford V. Johnson - Asymptotia

    In Print…!
    Here's the postcard they made to advertise the event of tomorrow (Tuesday)*. I'm pleased with how the design worked out, and I'm extra pleased about one important thing. This is the first time that any of my graphical work for the book has been printed professionally in any form on paper, and I am pleased to see that the pdf that I output actually properly gives the colours I've been working with on screen. There's always been this nagging background worry (especially after the struggles I went through to get the right output from my home printers) that somehow it would all be terribly wrong... that the colours would [...] Click to continue reading this post

    by Clifford at January 20, 2015 02:55 AM

    January 19, 2015

    Jester - Resonaances

    Weekend plot: spin-dependent dark matter
    This weekend plot is borrowed from a nice recent review on dark matter detection:
    It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of the spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of the spin vectors of the dark matter particle and a nucleon (proton or neutron). This is usually referred to as spin-dependent scattering, although other kinds of spin-dependent forces, which also depend on the relative velocity, are possible.

    In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus. One can, very roughly, think of this process as scattering on an odd unpaired nucleon. For this reason, xenon-target experiments such as Xenon100 or LUX are less sensitive to spin-dependent scattering on protons, because xenon nuclei have an even number of protons. In this case, experiments that contain fluorine in their target molecules have the best sensitivity. This is the case for the COUPP, Picasso, and SIMPLE experiments, which currently set the strongest limits on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, these limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun. But these limits depend on what dark matter annihilates into, and are therefore much more model-dependent than the direct detection limits.
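
As a rough, order-of-magnitude illustration of why the two channels are so different (my own sketch, not from the post): for equal couplings to protons and neutrons, the spin-independent cross section per nucleus picks up a coherent factor of roughly A^2, while the spin-dependent one instead tracks the spin of an odd unpaired nucleon and gets no such boost.

# Rough illustration only: coherent A^2 enhancement of spin-independent
# scattering for two common target nuclei.  Spin-dependent scattering has
# no analogous enhancement, which is one reason its limits are weaker.
targets = {"fluorine-19": 19, "xenon-131": 131}
for name, A in targets.items():
    print("%-11s  coherent (A^2) enhancement ~ %6d" % (name, A**2))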

    by Jester (noreply@blogger.com) at January 19, 2015 05:56 PM

    ZapperZ - Physics and Physicists

    I Win The Nobel Prize And All I Got Was A Parking Space
    I'm sure it is a slight exaggeration, but it is still amusing to read Shuji Nakamura's response on the benefits he got from UCSB after winning the physics Nobel Prize. On the benefits of winning a Nobel Prize:

     "I don't have to teach anymore and I get a parking space. That's all I got from the University of California." 

     Zz.

    by ZapperZ (noreply@blogger.com) at January 19, 2015 04:36 PM

    Georg von Hippel - Life on the lattice

    Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
    Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

    We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

    This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

    We would like to invite you to consider attending this programme and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

    Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants. The MITP team will individually arrange and book the accommodation for accepted participants.

    Please do not hesitate to contact us at coordinator@mitp.uni-mainz.de if you have any questions.

    We hope you will be able to join us in Mainz in 2015!

    With best regards,

    the organizers:
    Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

    by Georg v. Hippel (noreply@blogger.com) at January 19, 2015 04:22 PM

    Georg von Hippel - Life on the lattice

    Upcoming conference/workshop deadlines
    This is just a short reminder of some upcoming deadlines for conferences/workshops in the organization of which I am in some way involved.

    Abstract submission for QNP 2015 closes on 6th February 2015, and registration closes on 27th February 2015. Visit this link to submit an abstract, and this link to register.

    Applications for the Scientific Programme "Fundamental Parameters from Lattice QCD" at MITP close on 31st March 2015. Visit this link to apply.

    by Georg v. Hippel (noreply@blogger.com) at January 19, 2015 04:20 PM

    CERN Bulletin

    Daniel Brandt (1950-2014)

    It is with deep regret that we announce the death of Mr Daniel BRANDT, which occurred on 14 December 2014.

    Mr Daniel BRANDT, born on 21 January 1950, worked in the DG Unit and had been at CERN since 1 May 1981.

    The Director-General has sent a message of condolence to his family on behalf of the CERN personnel.

    Social Affairs
    Human Resources Department

     

    January 19, 2015 03:01 PM

    arXiv blog

    Turning PacMan Into A Street-Based Chase Game Using Smartphones

    Computer scientists have developed a set of Android-based tools that turn games like PacMan into street-based chase games.


    Anyone who grew up in the 1980s will be familiar with PacMan, the arcade game in which players use a joystick to guide a tiny yellow character through a two-dimensional maze. As it moves, the character must chomp its way through golden coins while avoiding being killed by ghosts who also sweep through the maze.

    January 19, 2015 03:01 PM

    Clifford V. Johnson - Asymptotia

    Experiments with Colour
    Well, that was interesting! I got a hankering to experiment with pastels the other day. I am not sure why. Then I remembered that I had a similar urge some years ago but had not got past the phase of actually investing in a few bits of equipment. So I dug them out and found a bit of time to experiment. It is not a medium I've really done anything in before and I have a feeling it is a good additional way of exploring technique, and feeling out colour design for parts of the book later on. Who knows? Anyway, all I know is that without my [...] Click to continue reading this post

    by Clifford at January 19, 2015 02:45 PM

    January 18, 2015

    Clifford V. Johnson - Asymptotia

    LAIH Luncheon – Ramiro Gomez
    Yesterday's Luncheon at the Los Angeles Institute for the Humanities, the first of the year, was another excellent one (even though it was a bit more compact than I'd have liked). We caught up with each other and discussed what's been happening over the holiday season, and then had the artist Ramiro Gomez give a fantastic talk ("Luxury, Interrupted: Art Interventions for Social Change") about his work in highlighting the hidden people of Los Angeles - those cleaners, caregivers, gardeners and others who help make the city tick along, but who are treated as invisible by most. As someone who very regularly gets totally ignored (like I'm not even there!) while standing in front of my own house by many people in my neighbourhood who [...] Click to continue reading this post

    by Clifford at January 18, 2015 05:29 PM

    Quantum Diaries

    The Ties That Bind
    Cleaning the ATLAS Experiment

    Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

    A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

    The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 meters long, 26 meters high, and sits 80 metres underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies.  It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

    On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.
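
    As a back-of-the-envelope illustration of that momentum measurement (the standard textbook relation, with a roughly 2 T solenoid field assumed for concreteness rather than any ATLAS-specific numbers):

        # Rough sketch: a charged particle with transverse momentum pT (in GeV/c)
        # moving in a solenoidal field B (in tesla) bends with a radius of curvature
        # r (in metres) given approximately by pT = 0.3 * B * r.
        def radius_of_curvature_m(pt_gev, b_tesla=2.0):
            return pt_gev / (0.3 * b_tesla)

        for pt in (1, 10, 100):
            print(f"pT = {pt:>3} GeV/c  ->  r ~ {radius_of_curvature_m(pt):.1f} m")
        # A 1 GeV/c track curls up well inside the detector, while a 100 GeV/c track
        # is nearly straight, which is why micron-level position measurements matter.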

    ATLAS Big Wheel

    ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

    The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.

    To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

    Inside the ATLAS Detector

    Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

    The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

    There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

    All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

    ATLAS Collaboration Members

    ATLAS Collaboration members in discussion. What discoveries are in store this year? IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

    This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.

    by Steven Goldfarb at January 18, 2015 04:42 PM

    January 17, 2015

    Sean Carroll - Preposterous Universe

    We Are All Machines That Think

    My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


    Active_brainJulien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

    As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

    We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

    Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

    From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

    What's harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we'll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

    Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

    by Sean Carroll at January 17, 2015 07:48 PM

    Tommaso Dorigo - Scientificblogging

    The Hard Life Of The Science Outreach Agent
    This morning I woke up at 6AM, had a shower and breakfast, dressed up, and rushed out in the cold of the fading night to catch a train to Mestre, where my car was parked. From there I drove due north for two hours, to a place in the mountains called Pieve di Cadore. A comfortable ride in normal weather, but this morning the weather was horrible, with an insistent water bombing from above which slowly turned to heavy sleet as I gained altitude. The drive was very unnerving as my car is old and not well equipped for these winter conditions - hydroplaning was frequent. But I made it.

    read more

    by Tommaso Dorigo at January 17, 2015 05:36 PM

    Lubos Motl - string vacua and pheno

    Papers by BICEP2, Keck, and Planck out soon
    ...and other news from the CMB Minnesota conference...
    Off-topic: I won't post a new blog post on the "warmest 2014" measurements and claims. See an updated blog post on RSS AMSU for a few new comments and a graph on the GISS and NCDC results.
    The Twitter account of Kevork Abazajian of UC Irvine seems to be the most useful public source where you may learn about some of the most important announcements made at a recent CMB+Pol conference in Minnesota (January 14th-16th, 2015).



    Is BICEP2's more powerful successor still seeing the gravitational waves?




    Here are the tweets:

    Charles Lawrence (Planck): Planck ultimate results will be out in weeks. CMB lensing potential was detected to 40σ by Planck. Measurement by Planck of \(1s\to 2s\) H transition from CMB has uncertainties 5.5 times better than the laboratory. Planck is not systematics limited on any angular scale. Future space mission needs 10-20x less noise. Try & find a foreground-free spot for polarization experiments (snark intended)-Planck 857 GHz map.

    100, 143, 217, 353 GHz polarization data won't be released in the 2015 @Planck data release.




    Anthony Chalinor (Planck): Temperature to polarization leakage in upcoming data release is not corrected for, so users beware. Planck finds that adding (light) massive sterile neutrinos does nothing to reduce their tension with the lensing+BAO data.

    Francois Boulanger (Planck): B-mode signal will not be detected without the removal of dust polarization from Planck with high accuracy and confidence. Dust SED does not vary strongly across the sky, which was surprising.

    Matthieu Tristram (Planck): Planck finds no region where the dust polarization can be neglected compared to primordial B-modes. (LM: This seems like a sloppy blanket statement to me: whether one is negligible depends on \(\ell\), doesn't it?)

    Sabino Matarrese (Planck): Starobinsky \(\varphi^2\) & exponential inflationary potential are most favored by the Planck primordial power spectrum reconstructions. No evidence of a primordial isocurvature non-Gaussianity is seen in Planck 2015. \(f_{NL} \sim 0.01\) non-Gaussianity of standard inflation will take LSS (halo bias & bispectrum) + 21 cm + CMB.

    Matias Zaldarriaga (theorist): if high \(r\) is detected, then something other than \(N\) \(e\)-folds is setting the inflationary dynamics. He is effectively giving \(r\lt 0.01\) as a theory-favored upper limit from inflation on the tensor amplitude.

    Abazajian @Kevaba: cosmology has the highest experimental sensitivity to neutrino mass and is forecast to maintain that position.

    Lorenzo Sorbo (theorist): non-boring tensors! Parity violation is detectable at 9σ. Parity violations can produce a differing amount of left- and right-handed gravitons, and produce non-zero TB and EB modes. Cosmological matter power spectrum gives neutrino mass constraints because neutrinos transition from radiation like to matter like. Shape and amplitude of power spectrum gives a handle on the neutrino mass. @Planck gives \(0.3\eV\) limits, the oscillation scale. \(dP_k/P_k\sim 1\%\) levels on matter power spec gives \(20\meV\) constraints on neutrino masses. CMB-S4 experiments alone should be able to get down to the \(34\meV\) level, \(15\meV\) level with BAO measurements.

    Olivier Doré (SPHEREx): SPHEREx mission for all-sky spectra for every 6.2" pixels to \(R=40\) in NIR. Quite a legacy! SPHEREx will detect with high significance single-field inflation non-Gaussianity. SPHEREx will detect *every* quasar in the Universe, approximately 1.5 million. SPHEREx astroph: 1412.4872.

    The @SPTelescope polarization main survey patch of 500 square degrees is currently underway.

    Bradford Benson (SPTpol): preliminary results presentation of SPTpol BB modes detection with 5σ level of lensing scale modes. SPT-3G will have 16,000 3-band multichroic pixels with 3 720 mm 4K alumina lenses w/ 3x FOV. SPT-3G will have 150σ detection of lensing B modes & forecast \(\sigma(N_{eff})=0.06\).

    Suzanne Staggs (ACTpol): ACTpol has detected CMB lensing B modes at 4.5σ. neutrinos & dark energy forecasts for Advanced ACT. Exact numbers are available in de Bernardis poster.

    Nils Halverson (POLARBEAR): POLARBEAR rejects "no lensing B-modes" at 4.2σ. Simons Array of 3x POLARBEAR-2 forecast sensitivity \(\sigma(m_\nu)=40\meV\), \(\sigma(r=0.1)=\sigma(ns)=0.006\).

    Paolo de Bernardis poster: Advanced ACT plus BOSS ultimate sensitivity \(96\meV\) for \(ν\) mass.

    John Kováč (BICEP2): BICEP2 sees excess power at 1 degree scale in BB.
    BICEP2 + Planck + Keck Array analysis SOON. Cannot be shown yet.
    Keck Array is 2.5 times more sensitive than BICEP2. The analysis is underway. With the dataset we had back when we published, we were only able to exclude dust at 1.7 sigma. No departure of SED from simple scaling law is very good news.
    At end of Kováč's talk: BICEP2 + Planck out by end of month. Those + Keck Array 150 GHz by spring 2015. All of this + other Keck frequencies will be released by the end of 2015.
    Aurelien Fraisse (SPIDER): SPIDER 6 detector, 2 frequencies flight under way, & foreground limited, not systematics. \(r \lt 0.03\) at 3σ, low foreground.

    Al Kogut (PIPER): PIPER will be doing almost all of sky B modes at multifrequency, 8 flights get to \(r \lt 0.007\) (2σ).

    CLASS will be able to measure \(r = 0.01\), even with galactic foregrounds. Site construction underway.

    Lloyd Knox (theorist): detecting relic neutrinos is possible via gravitational effects in the CMB. The dynamics of the phase shift in acoustic peaks results from variation in \(N_{eff}\).

    Uroš Seljak (theorist): multiple deflections in the weak lensing signal is important when the convergence sensitivity gets to ~1% level. Effects not at 10% in CLκκ, more like 1%. Krause et al. is in preparation. Delensing of B-modes has a theoretical limit at 0.2 μK arcmin or \(r=2\times 10^{-5}\).

    Carlo Contaldi: little was known about the dust polarization before BICEP2 & Planck. SPIDER = 6 x BICEP2 - 30 km of atmosphere and less exposure time. Detailed modeling of the dust polarization took place. Large B field uncertainty, input was taken from starlight observations. Full 3D models for the "southern patch" including the BICEP2 window reproduce the WMAP 23 GHz channel. Small & large scales great, not interm.

    Raphael Flauger (theorist): BICEP2 BB + Planck 353 GHz give no evidence for primordial B modes. Plus, the sun sets outside.

    by Luboš Motl (noreply@blogger.com) at January 17, 2015 06:39 AM

    January 16, 2015

    Symmetrybreaking - Fermilab/SLAC

    20-ton magnet heads to New York

    A superconducting magnet begins its journey from SLAC laboratory in California to Brookhaven Lab in New York.

    Imagine an MRI magnet with a central chamber spanning some 9 feet—massive enough to accommodate a standing African elephant. Physicists at the US Department of Energy’s Brookhaven National Laboratory need just such an extraordinary piece of equipment for an upcoming experiment. And, as luck would have it, physicists at SLAC National Accelerator Laboratory happen to have one on hand.

    Instead of looking at the world’s largest land animal, this magnet takes aim at the internal structure of something much smaller: the atomic nucleus.

    Researchers at Brookhaven's Relativistic Heavy Ion Collider (RHIC) specialize in subatomic investigations, smashing atoms and tracking the showers of fast-flying debris. RHIC scientists have been sifting through data from collisions of nuclei for 13 years, but to go even deeper they need to upgrade their detector technology. That's where a massive cylindrical magnet comes in.

    “The technical difficulty in manufacturing such a magnet is staggering,” says Brookhaven Lab physicist David Morrison, co-spokesperson for PHENIX, one of RHIC’s two main experiments. “The technology may be similar to an MRI—also a superconducting solenoid with a hollow center—but many times larger and completely customized. These magnets look very simple from the outside, but the internal structure contains very sophisticated engineering. You can’t just order one of these beasts from a catalogue. ”

    The proposed detector upgrade—called sPHENIX—launched the search for this elusive magnet. After assessing magnets at physics labs across the world, the PHENIX collaboration found an ideal candidate in storage across the country.

    At SLAC in California, a 40,000-pound beauty had recently finished a brilliant experimental run. This particular solenoid magnet—a thick, hollow pipe about 3.5 meters across and 3.9 meters long—once sat at the heart of a detector in SLAC’s BaBar experiment, which explored the asymmetry between matter and antimatter from 1999 to 2008.

    “We disassembled the detector and most of the parts have already gone to the scrap yard,” says Bill Wisniewski, who serves as the deputy to the SLAC Particle Physics and Astrophysics director and was closely involved with planning the move. “It’s just such a pleasure to see that there’s some hope that a major component of the detector—the solenoid—will be reused.”

    The magnet was loaded onto a truck and departed SLAC today, beginning its long and careful journey to Brookhaven’s campus in New York.

    “The particles that bind and constitute most of the visible matter in the universe remain quite mysterious,” says PHENIX co-spokesperson Jamie Nagle, a physicist at the University of Colorado. “We’ve made extraordinary strides at RHIC, but the BaBar magnet will take us even further. We’re grateful for this chance to give this one-of-a-kind equipment a second life, and I’m very excited to see how it shapes the future of nuclear physics.”

    Courtesy of: Brookhaven Lab

    The BaBar solenoid

    The BaBar magnet, a 30,865-pound solenoid housed in an 8250-pound frame, was built by the Italian company Ansaldo. Ansaldo's superconducting magnets have found their way into many pioneering physics experiments, including the ATLAS and CMS detectors of the Large Hadron Collider. The inner ring of the BaBar magnet spans 2.8 meters with a total outer diameter of nearly 3.5 meters—about the width of the Statue of Liberty's arm.

    During its run at SLAC, the BaBar experiment made many strides in fundamental physics, including contributions to the work awarded the 2008 Nobel Prize in Physics for the theory behind “charge-parity violation,” the idea that matter and antimatter behave in slightly different ways. This concept explains in part why the universe today is filled with matter and not antimatter.

    “BaBar was a seminal experiment in particle physics, and the magnet’s strength, size and uniform field proved essential to its discoveries,” says John Haggerty, the Brookhaven physicist leading the acquisition of the BaBar magnet. “It’s a remarkable piece of engineering, and it has potential beyond its original purpose.”

    In May 2013, Haggerty visited SLAC to meet with Wesley Craddock, the engineer who had worked with the magnet since its installation, and Mike Racine, the technician who supervised its removal and storage. "It was immediately clear that this excellent solenoid was in very good condition and almost ready to move," Haggerty says.

    Adds Morrison, “The BaBar magnet is larger than our initial plans called for, but using this incredible instrument will save considerable resources by repurposing existing national lab assets.”

    Brookhaven Lab was granted ownership of the BaBar solenoid in July 2013, but there was still the issue of the entire continent that sat between SLAC and the experimental hall of the PHENIX detector.

    Photo by: Andy Freeberg, SLAC National Accelerator Laboratory

    Moving the magnet

    The Department of Energy is no stranger to sharing massive magnets. In the summer of 2013, the 50-foot-wide Muon g-2 ring moved from Brookhaven Lab to Fermilab, where it will search for undiscovered particles hidden in the vacuum.

    “As you might imagine, shipping this magnet requires very careful consideration,” says Peter Wanderer, who heads Brookhaven’s Superconducting Magnet Division and worked with colleagues Michael Anerella and Paul Kovach on engineering for the big move. “You’re not only dealing with an oddly shaped and very heavy object, but also one that needs to be protected against even the slightest bit of damage. This kind of high-field, high-uniformity magnet can be surprisingly sensitive.”

    Preparations for the move required consulting with one of the solenoid’s original designers in Italy, Pasquale Fabbricatore, and designing special shipping fixtures to stabilize components of the magnet.

    After months of preparation at both SLAC and Brookhaven, the magnet—inside its custom packaging—was loaded onto a specialized truck this morning, and slowly began its journey to New York.

    “I’m sad to see it go,” Racine says. “It’s the only one like it in the world. But I’m happy to see it be reused.”

    After the magnet arrives, a team of experts will conduct mechanical, electrical, and cryogenic tests to prepare for its use in the sPHENIX upgrade.

    “We hope to have sPHENIX in action by 2021—including the BaBar magnet at its heart—but we have to remember that it is currently a proposal, and physics is full of surprises,” Morrison says.

    The BaBar magnet will be particularly helpful in identifying upsilons—the bound state of a very heavy bottom quark and an equally heavy anti-bottom quark. There are three closely related kinds of upsilons, each of which melts, or dissociates, at a different well-defined trillion-degree temperature. This happens in the state of matter known as quark-gluon plasma, or QGP, which was discovered at RHIC.

    “We can use these upsilons as a very precise thermometer for the QGP and understand its transition into normal matter,” Morrison says. “Something similar happened in the early universe as it began to cool microseconds after the big bang.”

     

    Like what you see? Sign up for a free subscription to symmetry!

    by Justin Eure at January 16, 2015 09:10 PM

    Symmetrybreaking - Fermilab/SLAC

    Scientists complete array on Mexican volcano

    An international team of astrophysicists has completed an advanced detector to map the most energetic phenomena in the universe.

    On Thursday, atop Volcán Sierra Negra, on a flat ledge near the highest point in Mexico, technicians filled the last of a collection of 300 cylindrical vats containing millions of gallons of ultrapure water.

    Together, the vats serve as the High-Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory, a vast particle detector covering an area larger than 5 acres. Scientists are using it to catch signs of some of the highest-energy astroparticles to reach the Earth.

    The vats sit at an altitude of 4100 meters (13,500 feet) on a rocky site within view of the nearby Large Millimeter Telescope Alfonso Serrano. The area remained undeveloped until construction of the LMT, which began in 1997, brought with it the first access road, along with electricity and data lines.

    Temperatures at the top of the mountain are usually just cool enough for snow year-round, even though the atmosphere at the bottom of the mountain is warm enough to host palm trees and agave.

    “The local atmosphere is part of the detector,” says Alberto Carramiñana, general director of INAOE, the National Institute of Astrophysics, Optics and Electronics.

    Scientists at HAWC are working to understand high-energy particles that come from space. High-energy gamma rays come from extreme environments such as supernova explosions, active galactic nuclei and gamma-ray bursts. They’re also associated with high-energy cosmic rays, the origins of which are still unknown.

    When incoming gamma rays and cosmic rays from space interact with Earth’s atmosphere, they produce a cascade of particles that shower the Earth. When these high-energy secondary particles reach the vats, they shoot through the water inside faster than particles of light can, producing an optical shock wave called “Cherenkov radiation.” The boom looks like a glowing blue, violet or ultraviolet cone.
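
    The geometry behind that cone is simple enough to sketch (a rough illustration using the textbook refractive index of water, not HAWC's actual reconstruction code):

        import math

        # Cherenkov light is emitted when a charged particle moves faster than the
        # phase velocity of light in the medium, i.e. when v/c > 1/n.
        n_water = 1.33                                        # refractive index of water
        beta_threshold = 1.0 / n_water                        # minimum v/c for emission
        theta_deg = math.degrees(math.acos(1.0 / n_water))    # cone angle for v/c ~ 1

        print(f"Cherenkov threshold: v/c > {beta_threshold:.2f}")
        print(f"Cone half-angle for an ultra-relativistic particle: ~{theta_deg:.0f} degrees")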

    The Pierre Auger Cosmic Ray Observatory in western Argentina, in operation since 2004, uses similar surface detector tanks to catch cosmic rays, but its focus is particles at higher energies—up to millions of giga-electronvolts. HAWC observes widely and deeply between the energy range of 100 giga-electronvolts and 100,000 giga-electronvolts.

    “HAWC is a unique water Cherenkov observatory, with no actual peer in the world,” Carramiñana says.

    Results from HAWC will complement the Fermi Gamma-ray Space Telescope, which observes at lower energy levels, as well as dozens of other tools across the electromagnetic spectrum.

    The vats at HAWC are made of corrugated steel, and each one holds a sealed, opaque bladder containing 50,000 gallons of liquid, according to Manuel Odilón de Rosas Sandoval, HAWC tank assembly coordinator. Each tank is 4 meters (13 feet) high and 7.3 meters (24 feet) in diameter and includes four light-reading photomultiplier tubes to detect the Cherenkov radiation.

    From its perch, HAWC sees the high-energy spectrum, in which particles have more energy in their motion than in their mass. The device is open to particles from about 15 percent of the sky at a time and, as the Earth rotates, is exposed to about 2/3 of the sky per day.
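
    Those coverage numbers are easy to sanity-check (assuming, purely for illustration, an effective field of view extending to roughly 45 degrees from the zenith):

        import math

        # Fraction of the full sky inside a cone of half-angle theta_max about the zenith:
        # solid angle 2*pi*(1 - cos(theta_max)), divided by 4*pi for the whole sky.
        theta_max = math.radians(45)
        instantaneous_fraction = (1 - math.cos(theta_max)) / 2
        print(f"Instantaneous sky fraction: {instantaneous_fraction:.0%}")   # ~15%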

    Combining data from the 1200 sensors, astrophysicists can piece together the precise origins of the particle shower. With tens of thousands of events hitting the vats every second, around a terabyte of data will arrive per day. The device will record half a trillion events per year.
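
    A quick consistency check of those quoted rates (taking 20,000 events per second as a stand-in for "tens of thousands"):

        events_per_second = 20_000            # representative value, not an official HAWC figure
        seconds_per_day = 24 * 3600
        seconds_per_year = 365 * seconds_per_day

        events_per_year = events_per_second * seconds_per_year
        print(f"{events_per_year:.1e} events per year")       # ~6e11, i.e. roughly half a trillion

        bytes_per_day = 1e12                  # "around a terabyte of data ... per day"
        bytes_per_event = bytes_per_day / (events_per_second * seconds_per_day)
        print(f"~{bytes_per_event:.0f} bytes per event on average")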

    The observatory, which was proposed in 2006 and began construction in 2012, is scheduled to operate for 10 years. “I look forward to the operational lifetime of HAWC,” Carramiñana says. “We are not sure what we will find.”

    More than 100 researchers from 30 partner organizations in Mexico and the United States collaborate on HAWC, with two additional associated scientists in Poland and Costa Rica. Prominent American partners include the University of Maryland, NASA’s Goddard Space Flight Center and Los Alamos National Laboratory. Funding comes from the Department of Energy, the National Science Foundation and Mexico’s National Council of Science and Technology.

    Like what you see? Sign up for a free subscription to symmetry!

    by Eagle Gamma at January 16, 2015 02:00 PM

    Quantum Diaries

    Will Self’s CERN

    “It doesn’t look to me like the rose window of Notre Dame. It looks like a filthy big machine down a hole.” — Will Self

    Like any documentary, biography, or other educational program on the radio, Will Self’s five-part radio program Self Orbits CERN is partially a work of fiction. It is based, to be sure, on a real walk through the French countryside along the route of the Large Hadron Collider, on the quest for a promised “sense of wonder”. And it is based on real tours at CERN and real conversations. But editorial and narrative choices have to be made in producing a radio program, and in that sense it is exactly the story that Will Self wants to tell. He is, after all, a storyteller.

    It is a story of a vast scientific bureaucracy that promises “to steal fire from the gods” through an over-polished public relations team, with day-to-day work done by narrow, technically-minded savants who dodge the big philosophical questions suggested by their work. It is a story of big ugly new machines whose function is incomprehensible. It is the story of a walk through thunderstorms and countryside punctuated by awkward meetings with a cast of characters who are always asked the same questions, and apparently never give a satisfactory answer.

    Self’s CERN is not the CERN I recognize, but I can recognize the elements of his visit and how he might have put them together that way. Yes, CERN has secretariats and human resources and procurement, all the boring things that any big employer that builds on a vast scale has to have. And yes, many people working at CERN are specialists in the technical problems that define their jobs. Some of us are interested in the wider philosophical questions implied by trying to understand what the universe is made of and how it works, but some of us are simply really excited about the challenges of a tiny part of the overall project.

    “I think you understand more than you let on.”Professor Akram Khan

    The central conflict of the program feels a bit like it was engineered by Self, or at least made inevitable by his deliberately-cultivated ignorance. Why, for example, does he wait until halfway through the walk to ask for the basic overview of particle physics that he feels he’s missing, unless it adds to the drama he wants to create? By the end of the program, he admits that asking for explanations when he hasn’t learned much background is a bit unfair. But the trouble is not whether he knows the mathematics. The trouble, rather, is that he’s listened to a typical, very short summary of why we care about particle physics, and taken it literally. He has decided in advance that CERN is a quasi-religious entity that’s somehow prepared to answer big philosophical questions, and never quite reconsiders the discussion based on what’s actually on offer.

    If his point is that particle physicists who speak to the public are sometimes careless, he’s absolutely right. We might say we are looking for how or why the universe was created, when really we mean we are learning what it’s made of and the rules for how that stuff interacts, which in turn lets us trace what happened in the past almost (but not quite) back to the moment of the Big Bang. When we say we’re replicating the conditions at that moment, we mean we’re creating particles so massive that they require the energy density that was present back then. We might say that the Higgs boson explains mass, when more precisely it’s part of the model that gives a mechanism for mass to exist in models whose symmetries forbid it. Usually a visit to CERN involves several different explanations from different people, from the high-level and media-savvy down to the technical details of particular systems. Most science journalists would put this information together to present the perspective they wanted, but Self apparently takes everything at face value, and asks everyone he meets for the big picture connections. His narrative is edited to literally cut off technical explanations, because he wants to hear about beauty and philosophy.

    Will Self wants the people searching for facts about the universe to also interpret them in the broadest sense, but this is much harder than he implies. As part of a meeting of the UK CMS Collaboration at the University of Bristol last week, I had the opportunity to attend a seminar by Professor James Ladyman, who discussed the philosophy of science and the relationship of working scientists to it. One of the major points he drove home was just how specialized the philosophy of science can be: that the tremendous existing body of work on, for example, interpreting Quantum Mechanics requires years of research and thought which is distinct from learning to do calculations. Very few people have had time to learn both, and their work is important, but great scientific or great philosophical work is usually done by people who have specialized in only one or the other. In fact, we usually specialize a great deal more, into specific kinds of quantum mechanical interactions (e.g. LHC collisions) and specific ways of studying them (particular detectors and interactions).

    Toward the end of the final episode, Self finds himself at Voltaire’s chateau near Ferney, France. Here, at last, is what he is looking for: a place where a polymath mused in beautiful surroundings on both philosophy and the natural world. Why have we lost that holistic approach to science? It turns out there are two very good reasons. First, we know an awful lot more than Voltaire did, which requires tremendous specialization discussed above. But second, science and philosophy are no longer the monopoly of rich European men with leisure time. It’s easy to do a bit of everything when you have very few peers and no obligation to complete any specific task. Scientists now have jobs that give them specific roles, working together as a part of a much wider task, in the case of CERN a literally global project. I might dabble in philosophy as an individual, but I recognize that my expertise is limited, and I really enjoy collaborating with my colleagues to cover together all the details we need to learn about the universe.

    In Self’s world, physicists should be able to explain their work to writers, artists, and philosophers, and I agree: we should be able to explain it to everyone. But he — or at least, the character he plays in his own story — goes further, implying that scientific work whose goals and methods have not been explained well, or that cannot be recast in aesthetic and moral terms, is intrinsically suspect and potentially valueless. This is a false dichotomy: it’s perfectly possible, even likely, to have important research that is often explained poorly! Ultimately, Self Orbits CERN asks the right questions, but it is too busy musing about what the answers should be to pay attention to what they really are.

    For all that, I recommend listening to the five 15-minute episodes. The music is lovely, the story engaging, and the description of the French countryside invigorating. The jokes were great, according to Miranda Sawyer (and you should probably trust her sense of humour rather than the woefully miscalibrated sense of humor that I brought from America). If you agree with me that Self has gone wrong in how he asks questions about science and which answers he expects, well, perhaps you will find some answers or new ideas for yourself.

    by Seth Zenz at January 16, 2015 01:48 PM

    Jon Butterworth - Life and Physics

    A follow up on research impact and the REF

    Anyone connected with UK academia, who follows news about it, or indeed who has met a UK academic socially over the last couple of years, will probably have heard about the Research Excellence Framework (REF). All UK universities had their research assessed in a long-drawn-out process which will influence how billions of pounds of research funding are distributed. Similar exercises go on every six or so years.

    The results are not a one-dimensional league table, which is good; so everyone has their favourite way of combining them to make their own league table, which is entertaining. My favourite is “research intensity” (see below, from the THE):

    ref

    A new element in the REF this time was the inclusion of some assessment of “Impact”. This (like the REF itself) is far from universally popular. Personally I’m relatively supportive of this element in principle though, as I wrote here. Essentially, while I don’t think all academic research should be driven by predictions of its impact beyond academia, I do think that it should be part of the mix. The research activity of any major physics department should, even serendipitously, have some impact outside of the academic discipline (as well as lots in it), and it is worth collecting and assessing some evidence for this. Your mileage in other subjects may vary.

    I also considered whether my Guardian blog might constitute a form of impact-beyond-academia for the discovery of the Higgs boson and the other work of the Large Hadron Collider, and I even asked readers for evidence and help (thanks!). In the end we did submit a “case study” on this. There is a summary of the case that was submitted here. The studies generally have more hard evidence than is given in that précis, but you get the idea.

    Similar summaries of all UCL’s impact case studies are given here. Enjoy…


    Filed under: Physics, Politics, Science, Science Policy, Writing Tagged: Guardian, Higgs, LHC, UCL

    by Jon Butterworth at January 16, 2015 08:30 AM

    January 15, 2015

    Andrew Jaffe - Leaves on the Line

    Oscillators, Integrals, and Bugs

    [Update: The bug seems fixed in the latest version, 10.0.2.]

    I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

    It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

    Unfortunately, Wolfram Inc.’s Mathematica (the most recent version 10.0.1) disagrees:

    MathematicaGaussHermiteBug

    The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
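
    For what it's worth, the same kind of integral comes out fine in a quick SymPy cross-check. This is a minimal sketch assuming a single factor of y sandwiched between the two Hermite polynomials (consistent with the answer being zero unless the indices differ by one); it is not the exact Mathematica input in the screenshot:

        import sympy as sp

        y = sp.symbols('y', real=True)

        def matrix_element(m, n):
            # Integrate H_m(y) * y * H_n(y) * exp(-y**2) over the whole real line;
            # for harmonic-oscillator states this vanishes unless |m - n| = 1.
            integrand = sp.hermite(m, y) * y * sp.hermite(n, y) * sp.exp(-y**2)
            return sp.integrate(integrand, (y, -sp.oo, sp.oo))

        print(matrix_element(7, 16))   # 0, since the indices differ by more than one
        print(matrix_element(7, 8))    # non-zero, since the indices differ by exactly one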

    I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?

    by Andrew at January 15, 2015 04:40 PM

    Quantum Diaries

    The Theory of Everything

    Last night I went to see The Theory of Everything, the biographical film about Stephen Hawking, adapted from the memoir of his ex-wife, Jane Wilde Hawking. News literally just in – it has been nominated for the Best Picture and Adapted Screenplay Oscars, and there are Best Actor and Best Actress nominations for Eddie Redmayne (Stephen) and Felicity Jones (Jane). Arguably today's most famous scientist, Stephen Hawking is a theoretical physicist and cosmologist, now holding the position of Director of Research at Cambridge's Centre for Theoretical Cosmology. He suffers from motor neurone disease, a degenerative disease that has left him unable to move most of the muscles in his body. He now communicates by selecting letters and words on a computer screen using one muscle in his cheek. His computerised voice is world famous and instantly recognisable. He is responsible for ground-breaking work on black holes and general relativity.

    Theory_of_Everything

    I thought the film was fantastically made and the acting incredible; Redmayne’s portrayal of Hawking’s physical condition was uncanny. I shed a few tears at the plight of this man surviving against all the odds whilst doing incredible theoretical physics, and his wife, ever patient and loving, taking care of him and bearing his children despite his health getting only worse.
    There have been some complaints about the lack of focus in the film on Hawking’s scientific work; the film instead focuses mainly on his relationship with Jane and their struggle as his condition deteriorates. This should not be a surprise when the film was adapted from Jane’s own writing. If you want to know more about Hawking’s work in physics, then I strongly recommend his physics books. I first attempted to read A Brief History of Time, his most famous publication, age 11. This was obviously optimistic of me, and I gave up after the first couple of chapters. I tried again during my A-levels but never got round to finishing it, but having now studied cosmology and general relativity in much more detail I fully intend to give it another try! I have however read The Universe in a Nutshell, a more accessible book on the history of modern physics and cosmology, as well as discussions on that holy grail of physics, and the title of the film, a ‘theory of everything’.

    But what is a theory of everything? Also known as a ‘final theory’, an ‘ultimate theory’, and a ‘master theory’, it sounds rather grand. A ToE would elegantly explain our universe, maybe even in just one equation, linking all the aspects that we can not currently reconcile with each other. It would allow a deep understanding of the universe we live in, as Hawking himself professed despite being an atheist:

    If we do discover a complete theory, it should in time be understandable in broad principle by everyone, not just a few scientists. Then we shall all, philosophers, scientists, and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason — for then we would know the mind of God.

    Sounds good, right? The ultimate triumph. Unfortunately, so far, attempts at developing a ToE have not delivered. Why not? First we need to understand a little more about the physics we know and understand.
    Our universe has four forces governing everything that happens within it:

    • Electromagnetism – the interaction of photons and charged particles that we are familiar with in electricity, magnets, etc.
    • Weak force – the interaction responsible for radioactive decay.
    • Strong force – the interaction that binds together the protons and neutrons in a nucleus
    • Gravity – the attraction of bodies with mass to each other, the reason we don’t fly away from the Earth and why the Earth orbits the sun (and also why we know about dark matter!)

    Why four? No one knows. It has been shown that electromagnetism and the weak force can be combined into an 'electroweak' force at high energies. This means that in our everyday low-energy universe (as opposed to the hot, dense universe shortly after the big bang), electromagnetism and the weak force are just two faces of the same force.

    If electromagnetism and the weak force can be combined, can we do the same with the strong force and gravity? Combining the electroweak and the strong force results in a “GUT” – a Grand Unified Theory, (NB despite being grand, this does not yet include gravity). The energy required to see the joining of the strong and the electroweak would be beyond the levels we could reach with particle colliders. We do not currently have a generally accepted GUT, but there are lots of complicated theories in the works.
    The final step to a ToE would be the joining of gravity with a GUT theory. This is the real sticking point. As Jane illustrates with a pea and a potato over dinner in the film, the unification of quantum field theory (the pea) on the tiny scales with general relativity (the potato) on large scales has so far proven undoable.
    Quantum field theory is what we particle physicists deal with, the standard model of particle physics, tiny things like photons and quarks and electrons, all interacting via electromagnetism, the weak force and the strong force. General relativity is far in the other direction: stars, galaxies, galaxy clusters. Big things with lots of mass, causing curvatures in space-time that manifest as gravity. Both quantum field theory and general relativity have been tested to extreme precision – they both work perfectly on their relative scales. So where does the problem in joining them lie?

    Hawking's greatest work is on black holes: the infinitely small and dense aftermath of the collapse of an enormous star. Once a star greater than about 23 solar masses runs out of fuel to produce energy, its core collapses under its own weight, expelling its outer layers in an explosion called a supernova that outshines its own galaxy. If the core is big enough, it will continue collapsing until it becomes a 'space-time singularity' – a point in space infinitely small and dense, where not even light can escape.
    When we try to understand the physics inside that point, we start encountering problems. We need both quantum field theory and general relativity – we have a tiny tiny space but a huge mass, and infinities start popping up all over the place. The maths just doesn’t work.
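
    To get a feel for the scales involved (a rough illustration using standard constants, not anything specific to Hawking's own calculations), the radius within which not even light can escape, the Schwarzschild radius, is r_s = 2GM/c^2:

        # Rough illustration: the Schwarzschild radius marks the boundary inside
        # which not even light can escape.
        G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
        c = 2.998e8         # speed of light, m/s
        M_sun = 1.989e30    # solar mass, kg

        def schwarzschild_radius_km(mass_in_solar_masses):
            return 2 * G * (mass_in_solar_masses * M_sun) / c**2 / 1e3

        for m in (1, 3, 10):
            print(f"{m} solar mass(es): r_s ~ {schwarzschild_radius_km(m):.0f} km")
        # About 3 km per solar mass: more than a sun's worth of matter squeezed inside
        # a region a few kilometres across, with the troublesome point at its centre.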

    The evolution of stars, showing how a sufficiently large star can end its life as a black hole

    Stephen Hawking, with the computerised speech system that has allowed him to communicate and continue his physics work after losing his ability to speak

    Hawking has dedicated much of his life to trying to unify these two pillars of modern physics, so far with no luck. This begs the question, if his incredible mind cannot do it, what hope do we have? Currently, a popular approach is string theory – the theory that everything is made of tiny strings, vibrating in many (up to 26!) dimensions. This may sound silly, but it’s actually quite elegant – each different particle is made of a string vibrating in a different mode. An issue with string theory currently is it offers no easily testable predictions. Some of the best minds of today are working on this, so there is still hope!

    Stephen Hawking is clearly an incredible man. He has a level of intelligence and a talent in mathematics and physics most of us physicists can only dream of. However, I believe Jane also deserves a huge amount of credit. The diagnosis of motor neurone disease came only shortly after they began dating, but she embarked on a life with him, marrying him and having his children, taking on the mammoth task of caring for him mostly alone, despite his prognosis of only 2 years to live.

    Of course, Hawking has far exceeded those two years. He is now 73, reaching what is basically a normal life expectancy despite having a disease that has an average survival from onset of only 3-4 years. He was diagnosed aged only 21. Diseases such as his are tragic, leaving a person’s mind totally intact but trapped inside a failing body. Many would just give up, but Hawking’s love for both Jane and physics drove him to persevere and become the esteemed professor he is today.

    I strongly recommend watching The Theory of Everything, even to those uninterested in cosmology. It’s a beautiful, romantic drama set in picturesque Cambridge, emotionally powerful and moving, and certainly does not require you to understand the physics!

    by Sally Shaw at January 15, 2015 12:54 PM

    January 14, 2015

    ATLAS Experiment

    The Ties That Bind

    A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

    Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

    The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 metres long, 26 metres high, and sits 80 metres underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies. It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

    On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.
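
    As an aside on how the magnets let us "precisely measure momenta": for a charged track bending in a solenoidal field, the transverse momentum is roughly p_T [GeV] ≈ 0.3 · q · B[T] · R[m]. A minimal sketch, with purely illustrative field and radius values rather than ATLAS specifications:

    ```python
    # Transverse momentum from track curvature: p_T [GeV] ~ 0.3 * q * B[T] * R[m]
    # for a particle of charge q (in units of e). Numbers below are illustrative only.
    def pt_from_curvature(B_tesla, radius_m, charge=1.0):
        """Approximate transverse momentum in GeV for a track of bending radius R in field B."""
        return 0.3 * charge * B_tesla * radius_m

    # e.g. a 2 T field and a 20 m bending radius would correspond to ~12 GeV
    print(pt_from_curvature(B_tesla=2.0, radius_m=20.0))  # -> 12.0
    ```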

    ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

    The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.
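
    For the curious, here is a toy version of that sector bookkeeping. The convention below – sector 1 centred at phi = 0 and sectors numbered counter-clockwise – is an assumption chosen so that sector 13 lands at the bottom; the official ATLAS numbering may differ in its offsets.

    ```python
    # Toy mapping from azimuthal angle phi to one of 16 ATLAS-style sectors.
    # Assumed convention (illustrative, not the official ATLAS numbering):
    # sector 1 is centred at phi = 0 deg and sectors increase counter-clockwise,
    # so the bottom of the detector (phi = 270 deg) falls in sector 13.
    def phi_to_sector(phi_deg, n_sectors=16):
        width = 360.0 / n_sectors
        # shift by half a sector so each sector is centred on a multiple of `width`
        return int(((phi_deg + width / 2) % 360.0) // width) + 1

    print(phi_to_sector(0))    # -> 1  (reference direction)
    print(phi_to_sector(270))  # -> 13 (straight down, where I was crawling)
    ```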

    To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

    Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

    The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

    There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

    All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

    ATLAS Collaboration members in discussion. What discoveries are in store this year?  IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

    This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.


    Steven Goldfarb is a physicist from the University of Michigan working on the ATLAS Experiment at CERN. He currently serves as the Outreach & Education Coordinator, a member of the ATLAS Muon Project, and an active host for ATLAS Virtual Visits. Send a note to info@atlas-live.ch and he will happily host a visit from your school.

    by Steve at January 14, 2015 05:10 PM

    ZapperZ - Physics and Physicists

    Superstrings For Dummies
    Here's another educational video by Don Lincoln out of Fermilab. This time, it is on the basic idea (and the emphasis here is on BASIC) of String/Superstrings.



    Zz.

    by ZapperZ (noreply@blogger.com) at January 14, 2015 05:05 PM

    Jon Butterworth - Life and Physics

    Prepare yourself for the restart

    As the preparations for the higher-energy restart of the LHC continue, it is good to see that this Horizon “Hunt for the Higgs” (with Jim Al-Khalili, Jim Gates, Adam Davison, me, and others) is available again on BBC iPlayer for a while. I recommend it as good preparation/revision. As is Smashing Physics, of course. Neither BBC iPlayer nor the book is available in the US or Canada, sadly, but don’t despair: the book is out on 27th Jan as Most Wanted Particle (see for example here).


    Filed under: Particle Physics, Physics, Science Tagged: BBC, books, cern, LHC, Smashing Physics, video

    by Jon Butterworth at January 14, 2015 09:20 AM

    January 13, 2015

    Lubos Motl - string vacua and pheno

    A model that agrees with tau-mu Higgs decays and 2 other anomalies
    ...and its incomplete divine stringy incarnation...

    I originally missed a hep-ph preprint almost a week ago,
    Explaining \(h\to \mu^\pm \tau^\mp\), \(B\to K^*\mu^+\mu^-\), and \(B\to K\mu^+\mu^-/B\to Ke^+e^-\) in a two-Higgs-doublet model with gauged \(L_\mu-L_\tau\)
    by Crivellin, D'Ambrosio, and Heeck, probably because it had such a repulsively boring title. By the way, do you agree with the hype saying that the new MathJax 2.5 beta loads 30-40 percent faster than the MathJax 2.4 that was used on this blog up to yesterday morning?

    The title of the preprint is uninspiring even though it contains all the good stuff. Less is sometimes more. At any rate, CMS recently reported a 2.4-sigma excess in the search for the decays of the Higgs boson\[

    h\to \mu^\pm \tau^\mp

    \] which is flavor-violating: a muon plus an antitau, or an antimuon plus a tau. Bizarre. The 2.4-sigma excess corresponds to the claim that about 1% of the Higgs bosons decay in this weird way! Correct me if I am wrong, but I think this excess has only been discussed in the comment section of this blog; I was very excited about it back in July.




    Aside from this flavor-violating hint, the LHCb experiment has reported several anomalies and the two most famous ones may be explained by the model promoted by this paper. One of them was discussed on TRF repeatedly:

    The \(B\)-mesons may decay to \(K\)-mesons plus a charged lepton pair, \(\ell^+\ell^-\), and the processes with \(\ell=e\) and \(\ell=\mu\) should be almost equally frequent according to the Standard Model but LHCb seems to see a difference between the electron-producing and muon-producing processes. The significance of the signal is 2.6 sigma.
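
    To see where the "2.6 sigma" comes from: it is just the distance of the measured electron-to-muon ratio from the Standard Model expectation (essentially 1) in units of the total uncertainty. A rough sketch, with numbers close to the published LHCb values but quoted from memory, so treat them as illustrative:

    ```python
    import math

    # Rough significance of the R_K anomaly: distance of the measured ratio of
    # B -> K mu mu to B -> K e e rates from the SM prediction, in units of the
    # combined uncertainty. Numbers are approximate / from memory.
    r_k_measured = 0.745          # LHCb central value (approximate)
    r_k_sm       = 1.0            # Standard Model expectation (lepton universality)
    stat, syst   = 0.090, 0.036   # statistical and systematic uncertainties (approximate)

    sigma_total = math.hypot(stat, syst)          # add uncertainties in quadrature
    print((r_k_sm - r_k_measured) / sigma_total)  # ~2.6
    ```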




    The final, third deviation is seen by LHCb, too. The rate of the \(B\) decay to an off-shell \(K^*\) along with the muon-antimuon pair, \(\mu^+\mu^-\), seems to deviate from the Standard Model by 2-3 sigma, too.

    Each of these three anomalies is significant approximately at the 2.5-sigma level and they seem to have something in common. The second generation – muons – is treated a bit differently. It doesn't seem to be just another copy of the first generation (or the third generation).

    The model by the CERN-Naples-Brussels team claims to be compatible with all these three anomalies. Within this model, the three anomalies are no longer independent from each other – which may strengthen your belief that they are not just flukes that will go away.

    If you were willing to oversimplify just a little bit, you could argue that these three anomalies are showing "almost the same thing" so you may add these excesses in the Pythagorean way. And \(\sqrt{3}\times 2.5 \approx 4.3\). With this optimistic interpretation, we may be approaching a 5-sigma excess. ;-)
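
    For the record, the arithmetic of that "Pythagorean" addition – combining the significances in quadrature under the optimistic assumption that all three anomalies probe the same effect – is a one-liner:

    ```python
    import math

    # Combine three ~2.5-sigma anomalies in quadrature ("the Pythagorean way"),
    # assuming (optimistically) they all measure the same underlying effect.
    sigmas = [2.5, 2.5, 2.5]
    combined = math.sqrt(sum(s**2 for s in sigmas))
    print(combined)   # ~4.33, i.e. sqrt(3) * 2.5
    ```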

    These three physicists construct a model. It is a two-Higgs-doublet model (2HDM). The number of Higgs doublets is doubled relative to the Standard Model – to yield the spectrum we know from minimal SUSY. But 2HDM is meant to be a more general model of the Higgs sector, a model ignoring the constraints on the parameters that are implied by supersymmetry. (But it is also a more special model because it ignores or decouples all the other superpartners.)

    And there's one special new feature that they need before they explain the anomalies. Normally, the lepton number \(L\) – and especially the three generation-specific lepton numbers \(L_e,L_\mu,L_\tau\) – are (approximate?) global symmetries. But these three folks promote one particular combination, namely the difference \(L_\mu-L_\tau\), to a gauge symmetry – one that is spontaneously broken by a scalar field.

    This gauging of the symmetry adds a new spin-one boson, \(Z'\), which has some mass, and right-handed neutrinos acquire some Majorana masses because of that, too. These new elementary particles and interactions also influence the processes such as the decays of the Higgs bosons and \(B\)-mesons – those we encountered in the anomalies.

    What I find particularly attractive is that the gauging of \(L_\mu-L_\tau\) may support an old crazy \(E_8\) idea of mine. It is a well-known fact that the adjoint (in this case also fundamental) representation \({\bf 248}\) of the exceptional Lie group \(E_8\) decomposes under the maximal \(E_6\times SU(3)\) subgroup as\[

    {\bf 248} = ({\bf 78},{\bf 1}) + ({\bf 1},{\bf 8}) + ({\bf 27},{\bf 3}) + ({\bf \bar{27}},{\bf \bar 3})

    \] It is the direct sum of the adjoint representations of the subgroup's factors; and of the tensor product of the fundamental representations (plus the complex conjugate representation: note that \(E_6\) is the only simple exceptional Lie group that has complex representations).

    If you use \(E_6\) or its subgroup as a grand unified group, the representation \({\bf 27}\) produces one generation of quarks and leptons. It works but what is very cool is that the decomposition of the representation of \(E_8\) seems to automatically produce three copies of the representation \({\bf 27}\).
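
    A trivial sanity check: the dimensions of the pieces on the right-hand side do add up to 248 (which of course only checks the counting, not the group theory):

    ```python
    # Dimensions of the E6 x SU(3) pieces in the branching of the 248 of E8:
    # (78,1) + (1,8) + (27,3) + (27bar,3bar)
    pieces = [(78, 1), (1, 8), (27, 3), (27, 3)]
    print(sum(a * b for a, b in pieces))   # -> 248
    ```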

    It almost looks as if the \(E_8\) group were predicting three generations. The three generations may be complex-rotated by the \(SU(3)_g\) group, the centralizer of the grand unified group \(E_6\) within the \(E_8\) group. Isn't it cool? I added the \(g\) subscript for "generational".

    A problem with this cute story is that the most natural stringy reincarnation of this \(E_8\) picture, the \(E_8\times E_8\) heterotic string theory (or its strongly coupled limit, the Hořava-Witten heterotic M-theory), doesn't normally support this way of counting the generations. Recall that in 1985, this became the first realistic embedding of the Standard Model (and SUSY and grand unification, not to mention gravity) within string theory. But the number of generations is usually written as \(N_g=|\chi|/2\), one-half of the absolute value of the Euler characteristic of the Calabi-Yau manifold. The latter constant may be anything. All traces of the special role of \(3\) are eliminated, and so on. A related defect is that the rest of the \(E_8\) group outside \(E_6\) is broken "by the compactification", which is a "stringy effect", so no four-dimensional effective field theory description ever sees the other \(E_8\) gauge bosons – except for the GUT \(E_6\) gauge bosons.
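
    To make the \(N_g=|\chi|/2\) point concrete with the textbook example: for a Calabi-Yau threefold \(\chi = 2(h^{1,1}-h^{2,1})\), so the classic quintic threefold (with \(h^{1,1}=1\) and \(h^{2,1}=101\)) gives 100 net generations – exactly the sense in which the special role of three is lost. A minimal sketch (the second manifold below is hypothetical, chosen only to give \(|\chi|=6\)):

    ```python
    # Net number of generations in the simplest heterotic standard embedding:
    # N_g = |chi| / 2, with chi = 2 * (h11 - h21) for a Calabi-Yau threefold.
    def net_generations(h11, h21):
        chi = 2 * (h11 - h21)
        return abs(chi) // 2

    # The classic quintic threefold: h11 = 1, h21 = 101  ->  chi = -200, 100 generations
    print(net_generations(1, 101))    # -> 100
    # A hypothetical manifold with chi = -6 would give the observed 3 generations
    print(net_generations(1, 4))      # -> 3
    ```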

    But from a different perspective, there could still be something special about the three generations – due to some effective, approximate, or local restoration of the whole \(E_8\) symmetry. The simplest heterotic compactifications identify the field strength in the \(SU(3)_g\) part of the gauge group – the generational \(SU(3)\) above, a subgroup of \(E_8\) – with the field strength in the gravitational \(SU(3)_{CY}\) holonomy – this \(SU(3)_{CY}\) is a subgroup of \(SO(6)\) rotating the six Calabi-Yau dimensions.

    The grand unified group is only an \(E_6\) or smaller because it's the centralizer of \(SU(3)_g\) within \(E_8\). And I had to take the centralizer of \(SU(3)_g\) because those are the components of the field strength that break the gauge group in \(d=10\) spacetime dimensions. Perhaps we should think that this field strength – or some of its components – is "small" in magnitude, so that one generator of this \(SU(3)_g\) – and \(L_\mu-L_\tau\) is indeed one generator of \(SU(3)_g\) if \((e,\mu,\tau)\) are interpreted as its fundamental triplet – is "much less broken" than the others.

    If the relevant component of the field strength may be considered "small" in this sense, it could be possible to organize the fermionic spectrum into (part of) an \(E_8\) multiplet. And one should find some field-theoretic \(Z'\) boson responsible for the spontaneous breaking of this generator of the generational \(SU(3)_g\).

    As you can see, if the heterotic models may be formulated in a slightly special, unorthodox, outside-the-box way (and yes, it's a somewhat big "if"), one may have a natural stringy model that achieves "more than grand" unification, explains why there are three generations of fermions, and accounts for three so far weak anomalies observed by CMS and LHCb (which will dramatically strengthen in a few months if they are real).

    Hat tip: Tommaso Dorigo

    by Luboš Motl (noreply@blogger.com) at January 13, 2015 06:49 PM

    Symmetrybreaking - Fermilab/SLAC

    Dark horse of the dark matter hunt

    Dark matter might be made up of a type of particle not many scientists are looking for: the axion.

    The ADMX experiment seems to be an exercise in contradictions.

    Dark matter, the substance making up 85 percent of all the mass in the universe, is invisible. The goal of ADMX is to detect it by turning it into photons, particles of light. Dark matter was forged in the early universe, under conditions of extreme heat. ADMX, on the other hand, operates in extreme cold. Dark matter comprises most of the mass of a galaxy. To find it, ADMX will use sophisticated devices microscopic in size.

    Scientists on ADMX—short for the Axion Dark Matter eXperiment—are searching for hypothetical particles called axions. The axion is a dark matter candidate that is also a bit of a dark horse, even as far as this esoteric branch of physics goes.

    Unlike most dark matter candidates, axions are very low in mass and interact only very weakly with particles of ordinary matter, so they are difficult to detect. However, according to theory, axions can turn into photons, which interact much more readily and are easier to detect.

    In July 2014, the US Department of Energy picked three dark matter experiments as most promising for continued support, including ADMX. The other two—the Large Underground Xenon (LUX) detector and the Cryogenic Dark Matter Search (CDMS)—are both designed to hunt for another dark matter candidate, weakly interacting massive particles, or WIMPs.

    With the upgrade funded by the Department of Energy, the ADMX team has added a liquid helium-cooled refrigerator to chill its sensitive detectors, known as superconducting quantum interference devices (SQUIDs). The ADMX experiment uses its powerful magnetic field to turn dark matter axions into microwave photons, which a SQUID can detect when operating at a specific frequency corresponding to the mass of the axion.

    Axions may be as puny as one trillionth of the mass of an electron. Compare that to WIMPs, which are predicted to be hundreds of thousands of times more massive than electrons, making them heavier than protons and neutrons.
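
    To connect these mass scales to the frequencies ADMX listens at: a photon produced by an axion of mass m has frequency f = mc²/h, so micro-electronvolt axions correspond to photons in the hundreds-of-megahertz-to-gigahertz range. A minimal sketch (the specific masses are illustrative, not ADMX's actual scan range):

    ```python
    # Convert an axion mass to the frequency of the microwave photon it would
    # produce: f = m c^2 / h.  Masses below are illustrative only.
    h_eV_s = 4.1357e-15          # Planck constant in eV*s
    m_electron_eV = 0.511e6      # electron rest mass-energy in eV

    def mass_to_frequency_GHz(mass_eV):
        """Photon frequency (GHz) corresponding to a particle rest energy in eV."""
        return mass_eV / h_eV_s / 1e9

    m_axion = 1e-12 * m_electron_eV    # "one trillionth of the mass of an electron"
    print(m_axion)                           # ~5e-7 eV, i.e. about half a micro-eV
    print(mass_to_frequency_GHz(m_axion))    # ~0.12 GHz
    print(mass_to_frequency_GHz(10e-6))      # a 10 micro-eV axion -> ~2.4 GHz
    ```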

    The other two DOE-boosted experiments, CDMS and LUX, have plenty of competition around the world in their search for WIMPs. But ADMX stands nearly alone as a large-scale hunter for axions. Leslie Rosenberg, University of Washington physicist and a leader of the ADMX project, sees this as a call to work quickly before others catch up. “People are getting nervous about WIMP dark matter,” he says. So the pressure is on to “do a definitive experiment, and either detect this [axion] or reject the hypothesis.”

    The answer to a problem

    Axions are hypothetical particles proposed in the late 1970s, originally to fix a problem entirely unrelated to dark matter.

    As physicists developed the theory of the strong nuclear force, which binds quarks together inside protons and neutrons, they noticed something wrong. Interactions inside neutrons should have made them electrically asymmetrical, so that they would flip when subjected to an electric field. However, experiments show no such thing, so something must have been missing in the theory.
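
    A rough, order-of-magnitude version of that argument (numbers are approximate): the theory allows a CP-violating angle, usually called theta, and a generic theta would give the neutron an electric dipole moment of very roughly theta × 10⁻¹⁶ e·cm, while experiments see nothing down to roughly 10⁻²⁶ e·cm – forcing theta to be fantastically small for no apparent reason.

    ```python
    # Why the strong-CP problem is a problem, in one division (order of magnitude only):
    # a generic CP-violating angle theta would induce a neutron electric dipole moment
    # d_n ~ theta * 1e-16 e*cm, but experiments see none down to ~1e-26 e*cm.
    d_n_per_theta = 1e-16      # rough theoretical estimate, e*cm per unit theta
    d_n_exp_limit = 1e-26      # rough experimental upper limit, e*cm

    theta_max = d_n_exp_limit / d_n_per_theta
    print(theta_max)           # ~1e-10: theta must be absurdly small "by accident"
    ```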

    “If you could just impose the symmetry, maybe that would be an answer, but you cannot,” says retired Stanford University physicist Helen Quinn. Instead, in 1977 she and Roberto Peccei, who was also at Stanford at that time, proposed a simple modification to the mathematics describing the strong force. The Peccei-Quinn model, as it is now known, removed the neutron asymmetry and, in doing so, predicted a new particle: the axion.

    Axions are appealing from a conceptual point of view, Rosenberg says. “I learned about axions when I was a graduate student, and it really hit a resonance with me then. Stuff that wasn't making sense suddenly made sense because of the axion.”

    A dark matter candidate

    Unlike the Higgs boson, axions lie outside the Standard Model of particle physics and are not governed by the same forces. If they exist, axions are transparent to light, don’t interact directly with ordinary matter except in very tenuous ways, and could have been produced in sufficient amounts in the early universe to make up the 85 percent of mass we call dark matter.

    “Provided axions exist, they're almost certain to be some fraction of dark matter,” says Oxford University theoretical physicist Joseph Conlon.

    “Axions are an explanation that fits in with everything we know about physics and all the ideas of how you might extend physics,” he says. “I think axions are one particle that almost all particle theorists would probably bet rather large amounts of money on that they do exist, even if they are very, very hard to detect.”

    Even if, like Conlon, we’re willing to wager that axions exist, it’s another matter to say they exist in such quantities and at the proper mass range to show up in our detectors.

    Rosenberg trusts that ADMX will work, and after that, it’s up to nature to reveal its hand: “What I can say is we'll likely have an experiment that at least over a broad mass range will either detect this axion or reject the hypothesis at high confidence.”

    Detecting even a single axion would be a vindication of the theory developed by Quinn, Peccei and others. Finding many axions could finally solve the dark matter problem and would make this dark horse particle a champion.

    Artwork by: Sandbox Studio, Chicago

     


    by Matthew R. Francis at January 13, 2015 04:18 PM

    Matt Strassler - Of Particular Significance

    Giving Public Talk Jan. 20th in Cambridge, MA

    Hope all of you had a good holiday and a good start to the New Year!

    I myself continue to be extraordinarily busy as we move into 2015, but I am glad to say that some of that activity involves communicating science to the public.  In fact, a week from today I will be giving a public talk — really a short talk and a longer question/answer period — in Cambridge, just outside of Boston and not far from MIT. This event is a part of the monthly “CafeSci” series, which is affiliated with the famous NOVA science television programs produced for decades by public TV/Radio station WGBH in Boston.

    Note for those of you who have gone to CafeSci events before: it will be in a new venue, not far from Kendall Square. Here’s the announcement:

    Tuesday, January 20th at 7pm (about 1 hour long)
    Le Laboratoire Cambridge (NEW LOCATION)
    http://www.lelaboratoirecambridge.com/
    650 East Kendall St, Cambridge, MA

    “The Large Hadron Collider Restarts Soon! What Lies Ahead?”

    Speaker: Matthew Strassler

    “After a long nap, the Large Hadron Collider [LHC], where the Higgs particle was discovered in 2012, will begin operating again in 2015, with more powerful collisions than before. Now that we know Higgs particles exist, what do we want to know about them? What methods can we use to answer our questions? And what is the most important puzzle that we are hoping the LHC will help us solve?”

    Public Transit: Red line to Kendall Square, walk straight down 3rd Street, turn right onto Athenaeum Street, and left onto East Kendall

    Parking: There is a parking deck – the 650 East Kendall Street Garage – accessible by Linskey Way.


    Filed under: Higgs, LHC News, Public Outreach Tagged: Higgs, LHC, PublicTalks

    by Matt Strassler at January 13, 2015 04:02 PM

    Tommaso Dorigo - Scientificblogging

    Lepton-Flavor-Violating Higgs Decays Fit In With LHCb Anomalies
    The CMS Collaboration at the LHC collider has recently measured a non-negligible rate for the fraction of Higgs boson decays into muon-tau pairs, as I reported in this article last summer. The observation is not statistically significant enough to cause an earthquake in the world of high-energy physics, and sceptics like myself just raised a gram of eyebrows at the announcement - oh yeah, just another 2-sigma effect. However, the matter becomes more interesting if there is a theoretical model which allows for the observed effect, AND if the model is not entirely crazy.

    read more

    by Tommaso Dorigo at January 13, 2015 02:02 PM

    Jester - Resonaances

    Do-or-die year
    The year 2015 began as any other year... I mean the hangover situation in particle physics. We have a theory of fundamental interactions - the Standard Model - that we know is certainly not the final theory, because it cannot account for dark matter, the matter-antimatter asymmetry, and cosmic inflation. At the same time, the Standard Model perfectly describes any experiment we have performed here on Earth (up to a few outliers that can be explained as statistical fluctuations). This is puzzling, because some of these experiments are in principle sensitive to very heavy particles, sometimes well beyond the reach of the LHC or any future colliders. Theorists cannot offer much help at this point. Until recently, naturalness was the guiding principle in constructing new theories, but few have retained confidence in it. No other serious paradigm has appeared to replace naturalness. In short, we know for sure there is new physics beyond the Standard Model, but we have absolutely no clue what it is and how much energy is needed to access it.

    Yet 2015 is different, because it is the year when the LHC restarts at 13 TeV energy. We should expect high-energy collisions some time in summer, and around 10 inverse femtobarns of data by the end of the year. This is the last significant energy jump most of us may experience before retirement; therefore, this year is going to be absolutely crucial for the future of particle physics. If, by next Christmas, we don't hear any whispers of anomalies in the LHC data, we will have to brace for tough times ahead. With no energy increase in sight, slow experimental progress, and no theoretical hints of a better theory, particle physics as we know it will be in deep merde.
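
    For a sense of what "10 inverse femtobarns" buys, the expected number of events of a given kind is simply the cross section times the integrated luminosity, N = σ × L. A minimal sketch with illustrative cross sections (not official 13 TeV predictions):

    ```python
    # Expected event counts: N = cross_section * integrated_luminosity.
    # 1 barn = 1e-24 cm^2; femtobarn (fb) and picobarn (pb) are the usual units.
    lumi_fb = 10.0                     # ~10 inverse femtobarns expected in 2015

    # Illustrative cross sections in femtobarns (not official 13 TeV numbers):
    processes = {
        "Higgs production (~50 pb)":            50_000.0,
        "hypothetical 1 pb new-physics signal":  1_000.0,
        "hypothetical 1 fb new-physics signal":      1.0,
    }
    for name, sigma_fb in processes.items():
        print(f"{name}: ~{sigma_fb * lumi_fb:,.0f} events in {lumi_fb} fb^-1")
    ```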

    You may protest that this is too pessimistic. In principle, new physics may show up at the LHC anytime between this fall and the year 2030, when 3 inverse attobarns of data will have been accumulated. So the hope will not die completely anytime soon. However, the subjective probability of making a discovery will decrease exponentially as time goes on, as you can see in the attached plot. Without a discovery, the mood will soon plummet, resembling that of the late Tevatron era rather than the thrill of pushing the energy frontier that we're experiencing now.
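
    As a purely illustrative toy model of that exponential decline (the numbers below are invented, not taken from the plot):

    ```python
    import math

    # Toy illustration (entirely made-up parameters) of a subjective discovery
    # probability that decays exponentially with time after the 13 TeV start.
    p0, tau_years = 0.5, 3.0          # invented initial probability and decay time
    for year in range(2015, 2031, 3):
        p = p0 * math.exp(-(year - 2015) / tau_years)
        print(year, round(p, 3))
    ```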

    But for now, anything may yet happen. Cross your fingers.

    by Jester (noreply@blogger.com) at January 13, 2015 10:22 AM

    Sean Carroll - Preposterous Universe

    Dark Matter, Explained

    If you’ve ever wondered about dark matter, or been asked puzzled questions about it by your friends, now you have something to point to: this charming video by 11-year-old Lucas Belz-Koeling. (Hat tip Sir Harry Kroto.)

    The title references “Draw My Life style,” which is (the internet informs me) a label given to this kind of fast-motion photography of someone drawing on a white board.

    You go, Lucas. I doubt I would have been doing anything quite this good at that age.

    by Sean Carroll at January 13, 2015 02:29 AM