Particle Physics Planet


March 29, 2015

Geraint Lewis - Cosmic Horizons

Musings on academic careers - Part 1
As promised, I'm going to put down some thoughts on academic careers. In doing this, I should put my cards on the table and point out that while I am a full-time professor of astrophysics at the University of Sydney, I didn't really plan my career or follow the musings given below. The musings come from taking a hard look at the current state of play in modern academia.

I am going to be as honest as possible, and surely some of my colleagues will disagree with my musings. Some people have a romantic view of many things, including science, and will trot out the line that science is somehow distinct from people. That might be the case, but the act of doing science is clearly done by people, and that means all of the issues that govern human interactions come into play. It is important to remember this.

Now, there may be some lessons below on how to become a permanent academic, but there is no magic formula. Still, realising what is at play may help.

Some of you may have heard me harp on about some of these issues before, but hopefully there is some new stuff as well. OK. Let's begin.

Career Management
It must be remembered that careers rarely just happen. Careers must be managed. I know some people hate to realise this, as science is supposed to be above all this career stuff - surely "good people" will be identified and rewarded!

Many students and postdocs seem to bumble along and only think of "what's next?" when they are up against the wire. I have spoken with students about the process of applying for postdocs, the long lead time needed, the requirement of at least three referees, all aspects of job hunting, and then, just moments from the submission of their PhD, they suddenly start looking for jobs. I weep a little when they frantically ask me "Who should I have as my third referee?"

Even if you are a brand-new PhD student, you need to think about career management. I don't mean planning, such as saying I will have a corner office in Harvard in 5 years (although there is nothing wrong with having aspirational goals!), but management. So, what do I mean?

Well, if you are interested in following a career in academia, then learn about the various stages and options involved and how you get from one to the other. This (along with careers beyond academia) should be mandatory for new students, and you should be reminded at all stages of your career to keep thinking about it. What kind of things should you be doing at the various stages of your career? What experience would your next employer like you to have? Try to spot holes in your CV and fill them in; this is very important! If you know you have a weakness, don't ignore it, fix it.

Again, there is no magic formula to guarantee that you will be successful in moving from one stage to another, but you should be able to work out the kind of CV you need. If you are having difficulties in identifying these things, talk with people (get a mentor!).

And, for one final point, the person responsible for managing your career is you. Not your supervisor, not your parents, and not the non-existent gods of science. You are.

Being Strategic
This is part of your career management.

In the romantic vision of science, an academic is left to toddle along and be guided by their inquisitive nature to find out what is going on in the Universe. But academia does not work that way (no matter how much you want to rage against it). If you want an academic career, then it is essential to realise that you will be compared to your peers at some point. At some point, someone is going to have a stack of CVs in front of them, will go through them to choose a subset who meet the requirements for a position, and will then rank that subset to find the best candidate. As part of your career management you need to understand what people are looking for! (I speak from experience of helping people prepare for jobs who know little about the actual job, the people offering it, what is needed, etc. etc.)

I know people get very cross with this, but there are key indicators people look at, things like the number of papers, citation rates, grant income, student supervision, teaching experience. Again, at all points you need to ask "is there a hole in my CV?" and if there is, fill it! Do not ignore it.

But, you might be saying, how can I be strategic in all of this? I just get on with my work! You need to think about what you do. If you have a long-running project, are there smaller projects you can do while waiting, to spin out some short, punchy papers? Can you lead something that you will become world-known for? Is there an idea you can spin off to a student to make progress on? You should be thinking of "results", and of results becoming talks at conferences and papers in journals.

If you are embarking on a new project, a project that is going to require substantial investment of time, you should ensure something will come from it, even if it is a negative or null result. You should never spend a substantial period of time, such as six months, and not have anything to show for it!

Are there collaborations you could forge and contribute to? Many people have done very well by being part of large collaborations, resulting in many papers, although be aware that when people see survey papers on a CV now, they ask "well, what did this person contribute to the project?".

The flip-side is also important. Beware of spending too much time on activities that do not add to your CV! I have seen some, especially students, spending a lot of time on committees and jobs that really don't benefit them. Now, don't get me wrong. Committee work, supporting meetings and the like are important, but think about where you are spending your time and ask yourself if your CV is suffering because of it.

How many hours should I work?
Your CV does not record the number of hours you work! It records your research output and successes. If you are publishing ten papers a year on four-hour days, then wonderful; but if you are two years into a postdoc, working 80 hours per week, and have not published anything, you might want to think about how you are using your time.

But I am a firm believer in working smarter, not harder, and in thinking and planning ideas and projects. Honestly, I have a couple of papers which (in a time before children) were born from ideas that crystallised over a weekend and were submitted soon after. I am not super-smart, but I do like to read widely, to go to as many talks as I can, to learn new things, and to apply ideas to new problems.

One thing I have seen over and over again is people at various stages of their careers becoming narrower and narrower in their focus, and it depresses me when I go to talks in my own department and see students not attending. This narrowness, IMHO, does not help in establishing an academic career. Breadth, of course, is no guarantee, but when I look at CVs, I like to see it.

So, the number of hours is not really the important issue; your output is. Work hours do become important when you are a permanent academic because of all the different things you have to do, especially admin and teaching, but as an early career researcher it should not be the defining thing. Your output is.

Is academia really for me?
I actually think this is a big one, and it is one which worries me as I don't think people at many stages of their career actually think about it. Being a student is different to being a postdoctoral researcher, which is different to being an academic, and it seems to me that people embarking on PhDs, with many a romantic notion about winning a Nobel prize somewhere along the way, don't really know what an "academic" is and what they do, just that it is some sort of goal.

In fact, this is such a big one, I think this might be a good place to stop and think about later musings.

by Cusp (noreply@blogger.com) at March 29, 2015 04:22 AM

Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3971 - March 26, 2015
Opportunity reaches a marathon milestone—in more ways than one. Larry Crumpler reports on the current status of the seemingly unstoppable Mars rover.

March 29, 2015 12:47 AM

March 28, 2015

Christian P. Robert - xi'an's og

off to New York

I am off to New York City for two days, giving a seminar at Columbia tomorrow and visiting Andrew Gelman there. My talk will be about testing as mixture estimation, with slides similar to the Nice ones below if slightly upgraded and augmented during the flight to JFK. Looking at the past seminar speakers, I noticed we were three speakers from Paris in the last fortnight, with Ismael Castillo and Paul Doukhan (in the Applied Probability seminar) preceding me. Is there a significant bias there?!


Filed under: Books, pictures, Statistics, Travel, University life Tagged: Andrew Gelman, Bayesian hypothesis testing, Columbia University, finite mixtures, New York city, Nice, Paris, slides, SMILE seminar

by xi'an at March 28, 2015 11:15 PM

Lubos Motl - string vacua and pheno

Dark matter: Science Friday with Weinberg, Hooper, Cooley
The background is temporarily "nearly white" today because I celebrate the Kilowatt Hour, also known as the Electricity Thanksgiving Day. Between 8:30 and 9:30 pm local time, turn all your electric appliances on and try to surpass one kilowatt. By this $0.20 sacrifice, you will fight those who want to return us to the Middle Ages and who organize the so-called Earth Hour.

Ira Flatow's Science Friday belongs among the better or best science shows. Yesterday, he hosted some very interesting guests and the topic was interesting, too:
Understanding the Dark Side of Physics
The guests were Steven Weinberg, the famous theorist and Nobel prize winner from Austin; Dan Hooper, a top Fermilab phenomenologist; and Jodi Cooley, a senior experimental particle physicist from Dallas.



And if you have 30 spare minutes, you should click the orange-white "play" button above and listen to this segment.




It doesn't just repeat some well-known old or medium-age things about dark matter. They start the whole conversation by discussing a very new story so that even listeners who are physicists may learn something new.




An observation was announced that imposes new upper limits on the self-interaction of dark matter. If it interacts with itself at all (it of course interacts gravitationally but if there is another contribution to its self-interaction), the strength of this force is smaller than an upper bound that is more constraining than those we knew before.

See e.g.
Hubble and Chandra discover dark matter is not as sticky as once thought
Dark matter does not slow down when colliding with itself, which means that it interacts with itself even less than previously thought.

The nongravitational interactions of dark matter in colliding galaxy clusters (by David Harvey+3, Science)
If you remember the "bullet cluster" that showed the existence of dark matter – and its separation from visible matter, they found 72 similar "clusters" and just like the 72 virgins waiting to rape a Muslim terrorist, all of them make the same suggestion: some dark matter is out there. They say that the certainty is now 7.6 sigma when these 72 observations are combined.

However, the dark matter location remains close enough to the associated visible stars, which allows them to deduce, at 95% confidence level, that the cross section per unit mass isn't too high:
\[
\frac{\sigma_{DM}}{m} \leq 0.47\,{\rm cm}^2 / {\rm g}
\]
The dark matter just doesn't seem more excited by itself than it is by the visible matter. Theories with "dark photons" are the first ones that are heavily constrained and many natural ones are killed. But maybe even some more conventional WIMP theories may be punished.
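To get a feel for the size of this bound, here is a small back-of-the-envelope sketch (mine, not from the paper or the show) converting the limit on σ/m into a per-particle cross section; the particle mass used is purely illustrative, since the measurement only constrains the ratio.

SIGMA_OVER_M_LIMIT = 0.47   # cm^2/g, the 95% CL bound quoted above
PROTON_MASS_G = 1.67e-24    # grams; an illustrative particle mass, not a claim about the DM mass
BARN_CM2 = 1e-24            # 1 barn = 1e-24 cm^2

sigma_limit_cm2 = SIGMA_OVER_M_LIMIT * PROTON_MASS_G
print(f"sigma < {sigma_limit_cm2:.2e} cm^2 (~{sigma_limit_cm2 / BARN_CM2:.2f} barn)")

For a proton-mass particle the bound works out to roughly a barn, i.e. a nuclear-scale cross section per particle, which is enormous compared to weak-scale scattering cross sections; that is why limits of this kind mainly bite on strongly self-interacting models such as dark photons rather than on garden-variety WIMPs.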

I think that if you have worked on my proposed far-fetched idea of holographic MOND, you have one more reason to increase your activity. And I guess that all axions are just fine with the new finding.

Weinberg clarifies the situation – why dark matter isn't understood too well (it's dark!) etc. – very nicely but many other things are said in the show, too. When the two other guests join, they also discuss other dark matter experiments, dark energy, gravitational waves, string theory etc.

Funnily enough, a layman listener wanted the guests to describe the cataclysms that would occur if the dark matter hit the Earth. The response is, of course, that dark matter hits our bodies all the time and nothing at all happens most of the time. I can't resist asking: Why would a layperson assume that dark matter must be associated with a "cataclysm"?

People have simply liked to think about cataclysms from the very beginning of primitive religions, and the would-be modern era encourages people to unscientifically attribute cataclysms to many things – carbon dioxide was the most popular "culprit" in the recent decade (and of course, there are many retarded people around us who still believe that CO2 emissions are dangerous). People just can't get interested in something if it is not hyped by a talk about catastrophes.

At one moment, Weinberg (who also promoted his new book about the history of physics, To Explain the World) wisely says that dark matter is preferred because it's also supported by some precision measurements of the CMB – and because it's much easier and more conservative to introduce a new particle species than to rewrite the laws of gravity. Flatow is laughing but it is a serious matter. Flatow is a victim of the populist delusion that, because there are so many particles, they must have been introduced carelessly, as if new particles had no natural enemies. But particles are introduced when they are seen or at least glimpsed.

Lots of particles are used by theoretical physicists because they are being seen experimentally every day and even the new particles that are not sharply seen yet are being introduced because they explain some observations or patterns in them – in this sense, the particles are being seen fuzzily or indirectly (at least when the theorist behind them has any quality). And all theories involving new particles compete with other theories involving other new particles so it's no "unrestricted proliferation of new concepts without standards". Instead, it's the business-as-usual science.

The real question is whether a rather conservative theory with new particle species is more likely – and ultimately more true – than some totally new radical theory that denies that physics may be described in terms of particles and fields. Of course, a true paradigm shift may be needed. But the evidence that it is so – or the ability of the existing, radically new frameworks to convince us that they are on the right track – isn't strong enough (yet?), which is why it seems OK to assume that the discrepancies may be fixed with some new particle species.

Also, Flatow is laughing when Weinberg calls the visible matter a contamination – because it's significantly smaller than dark matter which is still smaller than dark energy (by the magnitude of the energy density). Most laymen would find this laughable, too, and it's because the anthropocentrism continues to be believed by most laymen:

We are at the center of the Universe and everything we know from the everyday life must play an essential role in the most profound structure of the Universe. But as science has been showing for 500 years or so, this simply ain't so. If I ignore the fact that the Czechs are the ultimate average nation in the world, we the humans are a random update to one of many long branches of the evolution tree that arose from some rather random complex molecules revolving around an element that is not the most fundamental one, and the whole visible matter around us is a contamination and the clump of matter where we live is a mediocre rock orbiting a rank-and-file star in an unspectacular galaxy – and the Universe itself may be (but doesn't have to be!) a rather random and "not special" solution of string theory within the landscape.

Hooper mentions the 1960s and 1970s as the golden era of classical physics – and the recent years were slower.

At the end, Cooley and Weinberg discuss string theory – experimenters can't test it so the theory isn't useful for them but it's right that people work on it, and it has never been the case that all predictions of theories had to (or could) be tested. Weinberg wraps the discussion with some historical examples – especially one involving Newton – proving that the principle that all interesting predictions must be testable in the near future is misguided.

The short discussion on sciencefriday.com is full of crackpots irritated by the very concept of "dark matter" and the research of dark matter.



Off-topic: One of the good 2015 Czech songs, "[I Am Not a] Robotic Kid" (the lyrics preach against parents' planning their kids' lives and against conformism). Well, I should say "Czech-Japanese songs" because the leader of Mirai, the band, is Mirai Navrátil – as the name shows, a textbook example of a Czech-Japanese hybrid. He actually plans to sing in Japanese as well. It's their first song.

by Luboš Motl (noreply@blogger.com) at March 28, 2015 04:04 PM

Peter Coles - In the Dark

Nature or Degree

telescoper:

A thoughtful post to follow on from yesterday’s reaction to the GermanWings tragedy…

Originally posted on Mental Health Cop:

It was the timing and tone of yesterday's newspaper headlines that crossed the line for me: not any discussion about mental health and airline safety. Of course, occupational health and fitness standards for pilots should be rigorous and we heard yesterday about annual testing, psychological testing, etc., etc. By now, it may be easy to forget that when papers went to press on Thursday night, we still knew comparatively little about the pilot of the doomed flight. We certainly did not know that he appears to have ripped up sick notes that were relevant to the day of the crash or what kind of condition they related to – we still don't, as the German police have not confirmed it. Whilst we did have suggestion that he had experience of depression and 'burnout' – whatever that means – we don't know the nature or degree of this, do we?



by telescoper at March 28, 2015 01:24 PM

Emily Lakdawalla - The Planetary Society Blog

In Pictures: One-Year ISS Mission Begins
The one-year ISS mission of Scott Kelly and Mikhail Kornienko began with an early morning launch from Baikonur, Kazakhstan.

March 28, 2015 05:04 AM

Clifford V. Johnson - Asymptotia

Getty Visit
Every year the Los Angeles Institute for the Humanities has a luncheon at the Getty jointly with the Getty Research Institute, and the LAIH fellows get to hang out with the Getty Scholars and people on the Getty Visiting Scholars program (Alexa Sekyra, the head of the program, was at the luncheon today, so I got to meet her). The talk is usually given by a curator of an exhibition or program that's either current, or coming up. The first time I went, a few years ago, it was the Spring before the launch of the Pacific Standard Time region-wide celebration of 35 years of Southern California art and art movements ('45-'80) that broke away from letting New York and Western Europe call the tunes and began to define some of the distinctive voices of their own that are now so well known world wide... then we had a talk from a group of curators about the multi-museum collaboration to make that happen. One of the things I learned today from Andrew Perchuck, the Deputy Director of the Getty Research Institute who welcomed us all in a short address, was that there will be a new Pacific Standard Time event coming up in 2018, so stay tuned. This time it will have more of a focus on Latino and Latina American art. See here. Today we had Nancy Perloff tell us about the current exhibit (for which she is [...] Click to continue reading this post

by Clifford at March 28, 2015 02:13 AM

March 27, 2015

Christian P. Robert - xi'an's og

a most curious case of misaddressed mail

Today, I got two FedEx envelopes in the mail, both apparently from the same origin, namely the UF Statistics department reimbursing my travel expenses. However, once both envelopes were opened, I discovered that, while one indeed contained my reimbursement cheque, the other one contained several huge cheques addressed to… a famous Nova Scotia fiddler, Natalie MacMaster, for concerts she gave recently in the South East US, and with no possible connection with either me or the stats department! So I have no idea how those cheques came to me (before I returned them to their rightful recipient in Nova Scotia!). Complete mystery! The only possible link is that I just found out that Natalie MacMaster and her band played in Gainesville two weeks ago. Hence a potential scenario: at the local FedEx sorting centre, the envelope intended for Natalie MacMaster lost its label and someone took the second label from my then nearby envelope to avoid dealing with the issue… In any case, this gave me the opportunity to listen to pretty enticing Scottish music!


Filed under: Books, Travel, University life Tagged: Cape Breton, FedEx, fiddle, Gainesville, Irish music, Natalie MacMaster, Nova Scotia, Scotland, University of Florida

by xi'an at March 27, 2015 11:15 PM

Tommaso Dorigo - Scientificblogging

Another One Bites The Dust - WW Cross Section Gets Back Where It Belongs
Sometimes I think I am really lucky to have grown convinced that the Standard Model will not be broken by LHC results. It gives me peace of mind, detachment, and the opportunity to look at every new result found in disagreement with predictions with the right spirit - the "what's wrong with it ?" attitude that every physicist should have in his or her genes.

read more

by Tommaso Dorigo at March 27, 2015 10:22 PM

Emily Lakdawalla - The Planetary Society Blog

Ceres Gets Real; Pluto Lurks
Although we are still a long way from understanding this fascinating little body, Ceres is finally becoming a real planet with recognizable features! And that's kinda cool.

March 27, 2015 09:10 PM

CERN Bulletin

Archives of the 90s - CERN Bulletin
A compilation of archives from the 1990s for the 50th-anniversary issue of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:40 PM

CERN Bulletin

Archives of the 2000s - CERN Bulletin
A compilation of archives from the 2000s for the 50th-anniversary issue of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:37 PM

CERN Bulletin

Archives of the 70s - CERN Bulletin
A compilation of archives from the 1970s for the 50th-anniversary issue of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:24 PM

CERN Bulletin

Archives of the 80s - CERN Bulletin
A compilation of archives from the 1980s for the 50th-anniversary issue of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:23 PM

Peter Coles - In the Dark

It’s Time to Change: Don’t Demonize Depression!

Like everyone else I was shocked and saddened on Tuesday to hear of the crash of an Airbus A320 (GermanWings Flight 4U 9525 from Barcelona to Dusseldorf) in the French Alps. That initial reaction turned to consternation and confusion when it appeared that flying conditions were good and no “Mayday” signal was sent during the eight minutes it steadily lost altitude until it hit the mountains, and then to complete incomprehension yesterday as evidence emerged that the crash, which resulted in the deaths of 150 people, appeared to have been the result of deliberate action by the co-pilot, Andreas Lubitz. It seems that the co-pilot waited for the pilot to leave the cockpit to use the lavatory, then locked the door and proceeded to put the plane on a descending trajectory designed to take his own life along with everyone else on board. The horror of these events is beyond imagining. It’s also beyond imagining what could have possessed Andreas Lubitz to do such a terrible thing, for this was an act of mass murder.

Although it seems a paltry gesture, I’d like to take the opportunity to express my deepest condolences to the families, friends and loved ones of everyone who lost their life on that day, including Andreas Lubitz, whose family must be experiencing pain on a scale the rest of us are completely unable to contemplate.

I’m not going to speculate at all about what drove this man to behave the way he did. I’m not qualified to comment and it would obviously not be helpful to anyone for me to do so.

That has not stopped the gutter press, however, who have seized upon the fact that Andreas Lubitz had a history of depressive illness to sell copies of their rags by labelling him “a madman” and splashing lurid details about his private life. A Daily Mail article (to which I refuse to link) clearly implies that anyone who has ever suffered from depression is potentially a psychopathic killer. Not for the first time, I am ashamed that people exist with so little sensitivity that they could think this sort of journalism could ever be justified.

What this tragedy says to me is that only a better understanding of mental illness will help prevent similar things happening in future, and that will not happen if the media continue to demonize those who suffer from depression and/or other mental health problems, because the stigma that causes makes it so difficult to seek treatment. I know this for a fact. It is difficult enough to ask for help, even without headlines screaming in your face from the front page of the Daily Fail or the Sun or even the Daily Telegraph.

I agree completely with Professor Sir Simon Wessely, President of the Royal College of Psychiatrists, who is quoted in today’s Guardian as saying:

The loss of the GermanWings Airbus is a ghastly horror. Until the facts are established, we should be careful not to rush judgements. Should it be the case that one pilot had a history of depression, we must bear in mind that so do several million people in this country.

It is also true that depression is usually treatable. The biggest barrier to people getting help is stigma and fear of disclosure. In this country we have seen a recent fall in stigma, an increase in willingness to be open about depression and most important of all, to seek help.

We do not yet know what might be the lessons of the loss of the Airbus, but we caution against hasty decisions that might make it more, not less, difficult for people with depression to receive appropriate treatment. This will not help sufferers, families or the public.

A conservative estimate is that about one in every four people in the UK suffers from depression at one time or another, many of whom struggle with mental illness without either asking for or receiving medical help. Help is there, but we need to do much more to encourage people to use it.

Here’s another quote from Time to Change, for whose organization in Wales I wrote the piece linked above,

The terrible loss of life in the Germanwings plane crash is tragic, and we send our deepest sympathies to the families. Whilst the full facts are still emerging, there has been widespread media reporting speculating about the link with the pilot’s history of depression, which has been overly simplistic.

Clearly assessment of all pilots’ physical and mental health is entirely appropriate – but assumptions about risk shouldn’t be made across the board for people with depression, or any other illness. There will be pilots with experience of depression who have flown safely for decades and assessments should be made on a case by case basis.

Today’s headlines risk adding to the stigma surrounding mental health problems, which millions of people experience each year, and we would encourage the media to report this issue responsibly.

It is Time to Change attitudes to mental health, and a good place to start is to realise that it’s Time to Change how the media approach the subject. If you would like to complain about inappropriate reporting of mental health issues in the media then please follow the link here.


by telescoper at March 27, 2015 01:26 PM

Christian P. Robert - xi'an's og

likelihood-free model choice

Jean-Michel Marin, Pierre Pudlo and I just arXived a short review on ABC model choice, first version of a chapter for the incoming Handbook of Approximate Bayesian computation edited by Scott Sisson, Yannan Fan, and Mark Beaumont. Except for a new analysis of a Human evolution scenario, this survey mostly argues for the proposal made in our recent paper on the use of random forests and [also argues] about the lack of reliable approximations to posterior probabilities. (Paper that was rejected by PNAS and that is about to be resubmitted. Hopefully with a more positive outcome.) The conclusion of the survey is  that

The presumably most pessimistic conclusion of this study is that the connections between (i) the true posterior probability of a model, (ii) the ABC version of this probability, and (iii) the random forest version of the above, are at best very loose. This leaves open queries for acceptable approximations of (i), since the posterior predictive error is instead an error assessment for the ABC RF model choice procedure. While a Bayesian quantity that can be computed at little extra cost, it does not necessarily compete with the posterior probability of a model.

reflecting my hope that we can eventually come up with a proper approximation to the “true” posterior probability…


Filed under: Books, pictures, Statistics, University life, Wines Tagged: ABC, ABC model choice, Handbook of Approximate Bayesian computation, likelihood-free methods, Montpellier, PNAS, random forests, survey

by xi'an at March 27, 2015 01:18 PM

Symmetrybreaking - Fermilab/SLAC

Physics Madness: The Elemental Eight

Half the field, twice the fun. Which physics machine will win it all?

The first round of Physics Madness is over and the field has narrowed to eight amazing physics machines. The second round of voting is now open, so pick your favorites and send them on to the Fundamental Four.

You have until midnight PDT on Monday, March 30, to vote in this round. Come back on March 31 to see if your pick advanced and vote in the next round.

by Lauren Biron at March 27, 2015 01:00 PM

CERN Bulletin

Archives of the 60s - CERN Bulletin
A compilation of archives from the 1960s for the 50th-anniversary issue of the CERN Bulletin.

by Journalist, Student at March 27, 2015 11:18 AM

Georg von Hippel - Life on the lattice

Workshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline)
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

The scientific programme "Fundamental Parameters from Lattice QCD" at the Mainz Institute of Theoretical Physics (MITP) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

The deadline for registration is Tuesday, 31 March 2015.

by Georg v. Hippel (noreply@blogger.com) at March 27, 2015 09:20 AM

Emily Lakdawalla - The Planetary Society Blog

Four Ideas to Bust the Floor on Outer Planet Mission Costs
The road to lower-cost outer planet missions has been paved by NASA’s first two New Frontiers missions, the $700M New Horizons mission to Pluto and the $1.1B Juno mission to Jupiter. But can the cost of a mission to the outer solar system be cut to $450M, the limit for a Discovery mission?

March 27, 2015 01:25 AM

John Baez - Azimuth

A Networked World (Part 1)

guest post by David Spivak

The problem

The idea that’s haunted me, and motivated me, for the past seven years or so came to me while reading a book called The Moment of Complexity: our Emerging Network Culture, by Mark C. Taylor. It was a fascinating book about how our world is becoming increasingly networked—wired up and connected—and that this is leading to a dramatic increase in complexity. I’m not sure if it was stated explicitly there, but I got the idea that with the advent of the World Wide Web in 1991, a new neural network had been born. The lights had been turned on, and planet earth now had a brain.

I wondered how far this idea could be pushed. Is the world alive, is it a single living thing? If it is, in the sense I meant, then its primary job is to survive, and to survive it’ll have to make decisions. So there I was in my living room thinking, “oh my god, we’ve got to steer this thing!”

Taylor pointed out that as complexity increases, it’ll become harder to make sense of what’s going on in the world. That seemed to me like a big problem on the horizon, because in order to make good decisions, we need to have a good grasp on what’s occurring. I became obsessed with the idea of helping my species through this time of unprecedented complexity. I wanted to understand what was needed in order to help humanity make good decisions.

What seemed important as a first step is that we humans need to unify our understanding—to come to agreement—on matters of fact. For example, humanity still doesn’t know whether global warming is happening. Sure almost all credible scientists have agreed that it is happening, but does that steer money into programs that will slow it or mitigate its effects? This isn’t an issue of what course to take to solve a given problem; it’s about whether the problem even exists! It’s like when people were talking about Obama being a Muslim, born in Kenya, etc., and some people were denying it, saying he was born in Hawaii. If that’s true, why did he repeatedly refuse to show his birth certificate?

It is important, as a first step, to improve the extent to which we agree on the most obvious facts. This kind of “sanity check” is a necessary foundation for discussions about what course we should take. If we want to steer the ship, we have to make committed choices, like “we’re turning left now,” and we need to do so as a group. That is, there needs to be some amount of agreement about the way we should steer, so we’re not fighting ourselves.

Luckily there are many cases of a group that needs to, and is able to, steer itself as a whole. For example, as a human, my neural brain works with my cells to steer my body. Similarly, corporations steer themselves based on boards of directors, and based on flows of information, which run bureaucratically and/or informally between different parts of the company. Note that in neither case is there any suggestion that each part—cell, employee, or corporate entity—is “rational”; they’re all just doing their thing. What we do see in these cases is that the group members work together in a context where information and internal agreement is valued and often attained.

It seemed to me that intelligent, group-directed steering is possible. It does occur. But what’s the mechanism by which it happens, and how can we think about it? I figured that the way we steer, i.e., make decisions, is by using information.

I should be clear: whenever I say information, I never mean it “in the sense of Claude Shannon”. As beautiful as Shannon’s notion of information is, he’s not talking about the kind of information I mean. He explicitly said in his seminal paper that information in his sense is not concerned with meaning:

Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.

In contrast, I’m interested in the semantic stuff, which flows between humans, and which makes possible decisions about things like climate change. Shannon invented a very useful quantitative measure of meaningless probability distributions.
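For reference, the quantity Shannon actually defined is the entropy of the message distribution; this is the standard textbook formula, added here for concreteness rather than taken from the original post:

\[
H = -\sum_{x} p(x)\,\log_2 p(x)
\]

It depends only on the probabilities p(x) of the possible messages, never on what any message refers to, which is exactly the gap being pointed at here.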

That’s not the kind of information I’m talking about. When I say “I want to know what information is”, I’m saying I want to formulate the notion of human-usable semantic meaning, in as mathematical a way as possible.

Back to my problem: we need to steer the ship, and to do so we need to use information properly. Unfortunately, I had no idea what information is, nor how it’s used to make decisions (let alone to make good ones), nor how it’s obtained from our interaction with the world. Moreover, I didn’t have a clue how the minute information-handling at the micro-level, e.g., done by cells inside a body or employees inside a corporation, would yield information-handling at the macro (body or corporate) level.

I set out to try to understand what information is and how it can be communicated. What kind of stuff is information? It seems to follow rules: facts can be put together to form new facts, but only in certain ways. I was once explaining this idea to Dan Kan, and he agreed saying, “Yes, information is inherently a combinatorial affair.” What is the combinatorics of information?

Communication is similarly difficult to understand, once you dig into it. For example, my brain somehow enables me to use information and so does yours. But our brains are wired up in personal and ad hoc ways, when you look closely, a bit like a fingerprint or retinal scan. I found it fascinating that two highly personalized semantic networks could interface well enough to effectively collaborate.

There are two issues that I wanted to understand, and by to understand I mean to make mathematical to my own satisfaction. The first is what information is, as structured stuff, and what communication is, as a transfer of structured stuff. The second is how communication at micro-levels can create, or be, understanding at macro-levels, i.e., how a group can steer as a singleton.

Looking back on this endeavor now, I remain concerned. Things are getting increasingly complex, in the sorts of ways predicted by Mark C. Taylor in his book, and we seem to be losing some control: of the NSA, of privacy, of people 3D printing guns or germs, of drones, of big financial institutions, etc.

Can we expect or hope that our species as a whole will make decisions that are healthy, like keeping the temperature down, given the information we have available? Are we in the driver’s seat, or is our ship currently in the process of spiraling out of our control?

Let’s assume that we don’t want to panic but that we do want to participate in helping the human community to make appropriate decisions. A possible first step could be to formalize the notion of “using information well”. If we could do this rigorously, it would go a long way toward helping humanity get onto a healthy course. Further, mathematics is one of humanity’s best inventions. Using this tool to improve our ability to use information properly is a non-partisan approach to addressing the issue. It’s not about fighting, it’s about figuring out what’s happening, and weighing all our options in an informed way.

So, I ask: What kind of mathematics might serve as a formal ground for the notion of meaningful information, including both its successful communication and its role in decision-making?


by John Baez at March 27, 2015 01:00 AM

March 26, 2015

Christian P. Robert - xi'an's og

importance weighting without importance weights [ABC for bandits?!]

I did not read very far into the recent arXival by Neu and Bartók, but I got the impression that it was a version of ABC for bandit problems where the probabilities behind the bandit arms are not available but can be generated. The stopping rule found in “Recurrence weighting for multi-armed bandits” is the generation of an arm equal to the learner’s draw (p.5). Since there is no tolerance there, the method is exact (“unbiased”). As no reference is made to the ABC literature, this may be after all a mere analogy…


Filed under: Books, Statistics, University life Tagged: ABC, machine learning, multi-armed bandits, tolerance, Zurich

by xi'an at March 26, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

LPSC 2015: Aeolian Processes on Mars and Titan
Planetary scientist Nathan Bridges reports on results from the Lunar and Planetary Science Conference about the action of wind on the surfaces of Mars and Titan.

March 26, 2015 09:05 PM

Peter Coles - In the Dark

How Arts Students Subsidise Science

Some time ago I wrote a blog post about the madness of the current fee regime in UK higher education. Here is a quote from that piece:

To give an example, I was talking recently to a student from a Humanities department at a leading University (not my employer). Each week she gets 3 lectures and one two-hour seminar, the latter  usually run by a research student. That’s it for her contact with the department. That meagre level of contact is by no means unusual, and some universities offer even less tuition than that. A recent report states that the real cost of teaching for Law and Sociology is less than £6000 per student, consistent with the level of funding under the “old” fee regime; teaching in STEM disciplines on the other hand actually costs over £11k. What this means, in effect, is that Arts and Humanities students are cross-subsidising STEM students. That’s neither fair nor transparent.

Now here’s a nice graphic from the Times Higher that demonstrates the extent to which Science students are getting a much better deal than those in the Arts and Humanities.

Subsidy

The problem with charging fees relating to the real cost of studying the subject concerned is that it will deter students from doing STEM disciplines and cause even greater numbers to flock into cheaper subjects (which is where much of the growth in the HE sector over the last decade has actually taken place in any case). However, the diagram shows how absurd the current system (of equal fees regardless of subject) really is, and it’s actually quite amazing that more Arts students haven’t twigged what is going on. The point is that they are (unwittingly) subsidising their colleagues in STEM subjects. I think it would be much fairer if that subsidy were provided directly from the taxpayer via HEFCE; otherwise there’s a clear incentive for universities to rake in cash from students on courses that are cheap to teach, rather than to provide a proper range of courses across the entire curriculum. Where’s the incentive to bother teaching, e.g., Physics at all in the current system?

I re-iterate my argument from a few weeks ago that the Labour Party’s pledge to reduce fees to £6K across all disciplines would result in a much fairer and justifiable system, as long as there was a direct subsidy from the government to make good the shortfall (of around £6K per annum per student in Physics, for example).


by telescoper at March 26, 2015 04:51 PM

Peter Coles - In the Dark

Quantum Technology and the Frontier of Computing

Here’s a short video I just found featuring our own Winfried Hensinger, Professor of Quantum Technologies at the University of Sussex.

It’s part of a pilot documentary that explores the connection between science fiction and science reality. Here is the official blurb:

“The science fiction genre has a history of playing with our imagination; inventing “impossible” technologies and concepts such as time travel and teleportation. The “spooky” discoveries that quantum physicists have recently made are challenging the very “impossibility” of sci-fi. This documentary will explore the ways in which sci-fi has catalysed the imagination of scientists who are pioneering these discoveries.

The theme will explore the causal link between science and science fiction, using the inner workings of the quantum computer that Dr Winfried Hensinger is currently developing as a case study. Dr Hensinger, the head of the Sussex Ion Quantum Technology research group, was inspired early on by the well known 60s science fiction television show Star Trek. Having led multiple breakthroughs in the field of Quantum Computing research, he speaks to the importance of not losing our imagination, citing his childhood desire to be the science officer on Star Trek’s Enterprise as the prime motivator of going into the scientific field. Exploring the relationship between the human beings developing this technology and the non-human genre of science fiction, we will demonstrate that the boundaries between imagination and reality are blurrier than conventionally thought.”

I’ll take the opportunity presented by this video to remind you that the University of Sussex is the only university in the UK to offer an MSc course in Quantum Technologies, and this year there are special bursaries that make this an extremely attractive option for students seeking to extend their studies into this burgeoning new area. We’ve already seen a big surge in applications for this course, so if you’re thinking of applying don’t wait too long or it might fill up!


by telescoper at March 26, 2015 01:16 PM

Symmetrybreaking - Fermilab/SLAC

Better ‘cosmic candles’ to illuminate dark energy

Using a newly identified set of supernovae, researchers have found a way to measure distances in space twice as precisely as before.

Researchers have more than doubled the precision of a method they use to measure long distances in space—the same one that led to the discovery of dark energy.

In a paper published in Science, researchers from the University of California, Berkeley, SLAC National Accelerator Laboratory, the Harvard-Smithsonian Center for Astrophysics and Lawrence Berkeley National Laboratory explain that the improvement allows them to measure astronomical distances with an uncertainty of less than 4 percent.

The key is a special type of Type Ia supernovae.

Type Ia supernovae are thermonuclear explosions of white dwarfs—the very dense remnants of stars that have burned all of their hydrogen fuel. A Type Ia supernova is believed to be triggered by the merger or interaction of the white dwarf with an orbiting companion star.

“For a couple of weeks, a Type Ia supernova becomes increasingly bright before it begins to fade,” says Patrick Kelly, the new study’s lead author from the University of California, Berkeley. “It turns out that the rate at which it fades tells us about the absolute brightness of the explosion.”

If the absolute brightness of a light source is known, its observed brightness can be used to calculate its distance from the observer. This is similar to a candle, whose light appears fainter the farther away it is. That’s why Type Ia supernovae are also referred to as astronomical “standard candles.”
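As a minimal illustration of that “candle” logic (standard distance-modulus bookkeeping with made-up numbers; this is not code from the study):

def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative values only: a standardised Type Ia peak near M = -19.3 observed at m = 17.0
d_pc = luminosity_distance_pc(17.0, -19.3)
print(f"{d_pc:.2e} pc, i.e. about {d_pc * 3.26e-6:.0f} million light years")

In practice the absolute magnitude is itself calibrated from the shape of the light curve, which is where the fade-rate relation described above comes in.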

The 2011 Nobel Prize in Physics went to a trio of scientists who used these standard candles to determine that our universe is expanding at an accelerating rate. Scientists think this is likely caused by an unknown form of energy they call dark energy.

Measurements using these cosmic candles are far from perfect, though. For reasons that are not yet understood, the distances inferred from supernova explosions seem to be systematically linked to the environments the supernovae are located in. For instance, the mass of the host galaxy appears to have an effect of 5 percent.

In the new study, Kelly and his colleagues describe a set of Type Ia supernovae that allow distance measurements that are much less dependent on such factors.  Using data from NASA’s GALEX satellite, the Sloan Digital Sky Survey and the Kitt Peak National Observatory, they determined that supernovae located in host galaxies that are rich in young stars yield much more precise distances. 

The scientists also have a likely explanation for the extraordinary precision. “It appears that the corresponding white dwarfs were fairly young when they exploded,” Kelly says. “This relatively small spread in age may cause this particular set of Type Ia supernovae to be more uniform.”

For their study, the scientists analyzed almost 80 supernovae that, on average, were 400 million light years away. On an astronomical scale, this is a relatively short distance, and light emitted by these sources stems from rather recent cosmic times.

“An exciting prospect for our analysis is that it can be easily applied to Type Ia supernovae in larger distances—an approach that will let us analyze distances more accurately as we go further back in time,” Kelly says.

This knowledge, in turn, may help researchers draw a more precise picture of the expansion history of the universe and could provide crucial clues about the physics behind the ever increasing speed at which the cosmos expands. 

The intense ultraviolet emission from stars within a circle surrounding these supernovae (shown in white) reveals the presence of hot, massive stars and suggests that the supernovae result from the disruption of comparatively young white dwarf stars.

Courtesy of: Patrick Kelly/University of California, Berkeley

 


by Manuel Gnida at March 26, 2015 01:00 PM

astrobites - astro-ph reader's digest

Jupiter is my shepherd that I shall not want

Title: Jupiter’s Decisive Role in the Inner Solar System’s Early Evolution
Authors: Konstantin Batygin and Gregory Laughlin
First Author Institution: Division of Geological and Planetary Sciences, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
Status: Submitted to Proceedings of the National Academy of Sciences of the United States of America

Since the discovery of the first extra-solar planet around another star in 1995 we now know of ~ 4000 candidate planets in our galaxy. All of these discoveries have been key in improving our understanding of the formation of both planets and star systems as a whole. However a full explanation of the processes which form star systems is still elusive, including a description of the formation of our very own Solar System.

The problem with all these exoplanets and star systems we’ve discovered so far is that they suggest that the Solar System is just weird. Most other systems seem to have massive planets similar to the size and mass of Neptune, but which orbit their own star at about the distance of Mercury from the Sun (often these planets can be as big as Jupiter – since these are the easiest for us to detect). For example the famous Kepler-11 system is extremely compact, with 6 planets with a total mass of about 40 Earth masses all within 0.5 AU (astronomical unit – the distance of the Earth from the Sun) orbiting around a G-type star not at all dissimilar from the Sun.

Figure 1: Diagram showing the size and distribution of the Kepler detected exoplanets with a mass less than Jupiter and within the orbit of Mars. The radial distance is plotted logarithmically. The orbits of the terrestrial solar planets are also shown. Figure 1 in Batygin & Laughlin.

Figure 1 shows all the Kepler detected planets with masses less than Jupiter within the orbit of Mars from their own star. So if most other star systems seem to be planet mass heavy close into their star – why is the Solar System so mass poor and the Sun so alone in the centre?

The authors of this paper use simulations of how the orbital parameters of different objects in systems change due to the influence of other objects, to test the idea that Jupiter could have migrated inwards from the initial place it formed to somewhere between the orbits of Mars and Earth (~ 1.5 AU). The formation of Saturn, during Jupiter’s migration, is thought to have had a massive gravitational influence on Jupiter and consequently pulled it back out to its present day position.

If we think first about how star systems form, the most popular theory is the core-accretion theory, where material around a star condenses into a protoplanetary disc from which planets form from the bottom up. Small grains of dust collide and stick together forming small rocks, then in turn planetesimals and so on until a planet sized mass is formed. So we can imagine Jupiter encountering an army of planetesimals as it migrated inwards. The gravitational effects, perturbations and resonances between the orbits of the planetesimals and Jupiter ultimately work to cause the planetesimals to migrate inwards towards the Sun. The simulations in this paper show that with some simplifying assumptions the total amount of mass that could be swept up and inwards towards the Sun by Jupiter is ~10-20 Earth masses.

Not only are the orbital periods of these planetesimals affected, but their orbital eccentricities (how far from circular the orbit is) are also increased. This means that within that army of planetesimals there are now a lot more occasions where two orbits might cross, initiating the inevitable cascade of collisions which grind down each planetesimal into smaller and smaller chunks over time. Figure 2 shows how the simulations predict this for planetesimals as Jupiter migrates inwards.

Figure 2: Evolution of the eccentricity of planetesimals in the Solar System due to the orbital migration of Jupiter. Each planetesimal is colour coded according to its initial conditions. Figure 2a in Batygin & Laughlin.

Given the large impact frequency expected in a rather old protoplanetary disc where Jupiter and Saturn have already formed, the simulations suggest that a large fraction, if not all, of the planetesimals affected by Jupiter will quickly fall inwards to the Sun, especially after Jupiter reverses its migration direction. This decay in the orbits is shown in Figure 3 with each planetesimal getting steadily closer to the Sun until they are consumed by it.

Figure 3: The decay of the orbital radius of planetesimals over time due to Jupiter’s migration. The planetesimals are shown by the coloured lines and the planets from the Kepler-11 system are shown by the black lines. This plot shows how if a planet such as Jupiter migrated inwards in the Kepler-11 system, each of the 6 planets would end up destroyed by the parent star. Figure 3 in Batygin & Laughlin.

The orbital wanderings of Jupiter inferred from these simulations might explain the lack of present-day high mass planets close to the Sun. The planetesimals that survived the collisions and inwards migration may have been few and far between, only being able to coalesce to form smaller rocky planets like Earth.

The next step for this theory is to test it on another star system similar to our own, with giant planets whose orbital periods exceed 100 days. However, our catalogue of exoplanets is not complete enough to provide such a test yet. Finding these large planets at such large radii from their star is difficult because their long orbital periods limit how often we have the chance of observing a transit. For example, if we wanted to detect a Neptune-like planet in a Neptune-like orbit, a transit would only occur every 165 years. Detecting small planets close to a star is also difficult, as current telescope sensitivities don’t allow us to detect the change in the light of a star for planets so small.
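The 165-year figure quoted above is just Kepler’s third law at work; a quick sketch of that arithmetic (assuming a star of one solar mass, with the planet list purely for illustration):

def orbital_period_years(semi_major_axis_au, stellar_mass_msun=1.0):
    """Kepler's third law in solar units: P^2 = a^3 / M, with P in years, a in AU, M in solar masses."""
    return (semi_major_axis_au ** 3 / stellar_mass_msun) ** 0.5

for name, a_au in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2), ("Neptune", 30.1)]:
    print(f"{name:8s} a = {a_au:5.2f} AU -> P = {orbital_period_years(a_au):6.1f} yr")

# Neptune comes out at ~165 yr, so a transit survey would have to stare for decades
# (and be lucky with the alignment) to catch even a single transit.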

So perhaps we just haven’t been looking long enough or with good enough equipment to find star systems like ours. However, with missions like GAIA, TESS and K2 in the near future, perhaps we’ll find that the Solar System is not as unique as we think.

by Becky Smethurst at March 26, 2015 11:12 AM

Clifford V. Johnson - Asymptotia

Framed Graphite
It took a while, but I got this task done. (Click for a slightly larger view.) Things take a lot longer these days, because...newborn. You'll recall that I did a little drawing of the youngster very soon after his arrival in December. Well, it was decided a while back that it should be on display on a wall in the house rather than hide in my notebooks like my other sketches tend to do. This was a great honour, but presented me with a difficulty. I have a rule not to take any pages out of my notebooks. You'll think it is nuts, but you'll find that this madness is shared by many people who keep notebooks/sketchbooks. Somehow the whole thing is a Thing, if you know what I mean. To tear a page out would be a distortion of the record... it would spoil the archival aspect of the book. (Who am I kidding? I don't think it likely that future historians will be poring over my notebooks... but I know that future Clifford will be, and it will be annoying to find a gap.) (It is sort of like deleting comments from a discussion on a blog post. I try not to do that without good reason, and I leave a trail to show that it was done if I must.) Anyway, where was I? Ah. Pages. Well, I had to find a way of making a framed version of the drawing that kept the spirit and feel of the drawing intact while [...] Click to continue reading this post

by Clifford at March 26, 2015 04:51 AM

March 25, 2015

Christian P. Robert - xi'an's og

the maths of Jeffreys-Lindley paradox

Cristiano Villa and Stephen Walker arXived last Friday a paper entitled On the mathematics of the Jeffreys-Lindley paradox. Following the philosophical papers of last year, by Ari Spanos, Jan Sprenger, Guillaume Rochefort-Maranda, and myself, this provides a more statistical view on the paradox. Or “paradox”… Even though I strongly disagree with the conclusion, namely that a finite (prior) variance σ² should be used in the Gaussian prior. And fall back on classical Type I and Type II errors. So, in that sense, the authors avoid the Jeffreys-Lindley paradox altogether!

The argument against considering a limiting value for the posterior probability is that it converges to 0, 1, or an intermediate value. In the first two cases it is useless. The intermediate case is achieved when the prior probabilities of the null and alternative hypotheses depend on the variance σ². While I do not want to argue in favour of my 1993 solution

\rho(\sigma) = 1\big/\left(1+\sqrt{2\pi}\,\sigma\right)

since it is ill-defined in measure theoretic terms, I do not buy the coherence argument that, since this prior probability converges to zero when σ² goes to infinity, the posterior probability should also go to zero. In the limit, probabilistic reasoning fails since the prior under the alternative is a measure not a probability distribution… We should thus abstain from over-interpreting improper priors. (A sin sometimes committed by Jeffreys himself in his book!)



by xi'an at March 25, 2015 11:15 PM

arXiv blog

Physicists Describe New Class of Dyson Sphere

Physicists have overlooked an obvious place to search for shell-like structures constructed around stars by advanced civilizations to capture their energy.


Back in 1960, the physicist Freeman Dyson published an unusual paper in the journal Science entitled “Search for Artificial Stellar Sources of Infra-red Radiation.” In it, he outlined a hypothetical structure that entirely encapsulates a star to capture its energy, which has since become known as a Dyson sphere.

March 25, 2015 09:47 PM

arXiv blog

An Emerging Science of Clickbait

Researchers are teasing apart the complex set of links between the virality of a Web story and the emotions it generates.

March 25, 2015 08:06 PM

Quantum Diaries

The dawn of DUNE

This article appeared in symmetry on March 25, 2015.

A powerful planned neutrino experiment gains new members, new leaders and a new name. Image: Fermilab

The neutrino experiment formerly known as LBNE has transformed. Since January, its collaboration has gained about 50 new member institutions, elected two new spokespersons and chosen a new name: Deep Underground Neutrino Experiment, or DUNE.

The proposed experiment will be the most powerful tool in the world for studying hard-to-catch particles called neutrinos. It will span 800 miles. It will start with a near detector and an intense beam of neutrinos produced at Fermi National Accelerator Laboratory in Illinois. It will end with a 10-kiloton far detector located underground in a laboratory at the Sanford Underground Research Facility in South Dakota. The distance between the two detectors will allow scientists to study how neutrinos change as they zip at close to the speed of light straight through the Earth.

“This will be the flagship experiment for particle physics hosted in the US,” says Jim Siegrist, associate director of high-energy physics for the US Department of Energy’s Office of Science. “It’s an exciting time for neutrino science and particle physics generally.”

In 2014, the Particle Physics Project Prioritization Panel identified the experiment as a top priority for US particle physics. At the same time, it recommended the collaboration take a few steps back and invite more international participation in the planning process.

Physicist Sergio Bertolucci, director of research and scientific computing at CERN, took the helm of an executive board put together to expand the collaboration and organize the election of new spokespersons.

DUNE now includes scientists from 148 institutions in 23 countries. It will be the first large international project hosted by the US to be jointly overseen by outside agencies.

This month, the collaboration elected two new spokespersons: André Rubbia, a professor of physics at ETH Zurich, and Mark Thomson, a professor of physics at the University of Cambridge. One will serve as spokesperson for two years and the other for three to provide continuity in leadership.

Rubbia got started with neutrino research as a member of the NOMAD experiment at CERN in the ’90s. More recently he was a part of LAGUNA-LBNO, a collaboration that was working toward a long-baseline experiment in Europe. Thomson has a long-term involvement in US-based underground and neutrino physics. He is the DUNE principal investigator for the UK.

Scientists are coming together to study neutrinos, rarely interacting particles that constantly stream through the Earth but are not well understood. They come in three types and oscillate, or change from type to type, as they travel long distances. They have tiny, unexplained masses. Neutrinos could hold clues about how the universe began and why matter greatly outnumbers antimatter, allowing us to exist.

“The science is what drives us,” Rubbia says. “We’re at the point where the next generation of experiments is going to address the mystery of neutrino oscillations. It’s a unique moment.”

Scientists hope to begin installation of the DUNE far detector by 2021. “Everybody involved is pushing hard to see this project happen as soon as possible,” Thomson says.

Jennifer Huber and Kathryn Jepsen

Image: Fermilab

by Fermilab at March 25, 2015 05:46 PM

Symmetrybreaking - Fermilab/SLAC

The dawn of DUNE

A powerful planned neutrino experiment gains new members, new leaders and a new name.

The neutrino experiment formerly known as LBNE has transformed. Since January, its collaboration has gained about 50 new member institutions, elected two new spokespersons and chosen a new name: Deep Underground Neutrino Experiment, or DUNE.

The proposed experiment will be the most powerful tool in the world for studying hard-to-catch particles called neutrinos. It will span 800 miles. It will start with a near detector and an intense beam of neutrinos produced at Fermi National Accelerator Laboratory in Illinois. It will end with a 10-kiloton far detector located underground in a laboratory at the Sanford Underground Research Facility in South Dakota. The distance between the two detectors will allow scientists to study how neutrinos change as they zip at close to the speed of light straight through the Earth.

“This will be the flagship experiment for particle physics hosted in the US,” says Jim Siegrist, associate director of high-energy physics for the US Department of Energy’s Office of Science. “It’s an exciting time for neutrino science and particle physics generally.”

In 2014, the Particle Physics Project Prioritization Panel identified the experiment as a top priority for US particle physics. At the same time, it recommended the collaboration take a few steps back and invite more international participation in the planning process.

Physicist Sergio Bertolucci, director of research and scientific computing at CERN, took the helm of an executive board put together to expand the collaboration and organize the election of new spokespersons.

DUNE now includes scientists from 148 institutions in 23 countries. It will be the first large international project hosted by the US to be jointly overseen by outside agencies.

This month, the collaboration elected two new spokespersons: André Rubbia, a professor of physics at ETH Zurich, and Mark Thomson, a professor of physics at the University of Cambridge. One will serve as spokesperson for two years and the other for three to provide continuity in leadership.

Rubbia got started with neutrino research as a member of the NOMAD experiment at CERN in the ’90s. More recently he was a part of LAGUNA-LBNO, a collaboration that was working toward a long-baseline experiment in Europe. Thomson has a long-term involvement in US-based underground and neutrino physics. He is the DUNE principal investigator for the UK.

Scientists are coming together to study neutrinos, rarely interacting particles that constantly stream through the Earth but are not well understood. They come in three types and oscillate, or change from type to type, as they travel long distances. They have tiny, unexplained masses. Neutrinos could hold clues about how the universe began and why matter greatly outnumbers antimatter, allowing us to exist.

“The science is what drives us,” Rubbia says. “We’re at the point where the next generation of experiments is going to address the mystery of neutrino oscillations. It’s a unique moment.”

Scientists hope to begin installation of the DUNE far detector by 2021. “Everybody involved is pushing hard to see this project happen as soon as possible,” Thomson says. 

Courtesy of: Fermilab

 


by Jennifer Huber and Kathryn Jepsen at March 25, 2015 01:00 PM

Peter Coles - In the Dark

One Fine Conformal Transformation

It’s been a while since I posted a cute physics problem, so try this one for size. It is taken from a book of examples I was given in 1984 to illustrate a course on Physical Applications of Complex Variables, part of a 4-week Long Vacation programme I took immediately prior to my third year as an undergraduate at Cambridge. Students intending to specialise in Theoretical Physics in Part II of the Natural Sciences Tripos (as I was) had to do this course, which lasted about 10 days and was followed by a pretty tough test. Those who failed the test had to switch to Experimental Physics, and spend the rest of the summer programme doing laboratory work, while those who passed it carried on with further theoretical courses for the rest of the Long Vacation programme. I managed to get through, to find that what followed wasn’t anywhere near as tough as the first bit. I inferred that Physical Applications of Complex Variables was primarily there in order to separate the wheat from the chaff. It’s always been an issue with Theoretical Physics courses that they attract two sorts of student: one that likes mathematical work and really wants to do theory, and another that hates experimental physics slightly more than he/she hates everything else. This course, and especially the test after it, was intended to minimize the number of the second type getting into Part II Theoretical Physics.

Another piece of information that readers might find interesting is that the lecturer for Physical Applications of Complex Variables was a young Mark Birkinshaw, now William P. Coldrick Professor of Cosmology and Astrophysics at the University of Bristol.

As it happens, this term I have been teaching a module on Theoretical Physics to second-year undergraduates at the University of Sussex. This covers many of the topics I studied at Cambridge in the second year, including the calculus of variations, relativistic electrodynamics, Green’s functions and, of course, complex functions. In fact I’ve used some of the notes I took as an undergraduate, and have kept all these years, to prepare material for my own lectures. I am pretty adamant therefore that the academic level at which we’re teaching this material now is no lower than it was thirty years ago.

Anyway, here’s a typically eccentric problem from the workbook, from a set of problems chosen to illustrate applications of conformal transformations (which I’ve just finished teaching this term). See how you get on with it. The first correct answer submitted through the comments box gets a round of applause.

conformal transformation

 


by telescoper at March 25, 2015 12:23 PM

Quantum Diaries

Vote LUX, and give an underdog a chance

I’ve had a busy few weeks after getting back from America, so apologies for the lack of blogging! Some things I’ve been up to:
– Presenting my work on LUX to MPs at the Houses of Parliament for the SET for Britain competition. No prizes, but lots of interesting questions from MPs, for example: “and what can you do with dark matter once you find it?”. I think he was looking for monetary gain, so perhaps I should have claimed dark matter will be the zero-carbon fuel of the future!
– Supplementing my lowly salary by marking an enormous pile of undergraduate problem sheets and by participating in paid eye-tracking studies for both the UCL psychology department and a marketing company
– The usual work on analysing LUX data and trying to improve our sensitivity to low mass dark matter.
And on Saturday, I will be on a panel of “experts” (how this has happened I don’t know) giving a talk as part of the UCL Your Universe festival. The discussion is aptly titled “Light into the Dark: Mystery of the Invisible Universe”, and if you’re in London and interested in this sort of thing, you should come along. Free tickets are available here.

I will hopefully be back to posting more regularly now, but first, a bit of promotion!

Symmetry Magazine are running a competition to find “which physics machine will reign supreme” and you can vote right here.

Physics Madness: Symmetry Magazine’s tournament to find the champion physics experiment

The first round matches LUX with the LHC, and considering we are a collaboration of just over 100 (compared to CERN’s thousands of scientists) with nothing like the media coverage the LHC gets, we’re feeling like a bit of an underdog.
But you can’t just vote for us because we’re an underdog, so here are some reasons you should #voteLUX:

– For spin-dependent WIMP-nucleon scattering for WIMPs above ~8 GeV, LUX is 10,000x more sensitive than the LHC (see figure below).
– LUX cost millions of dollars, the LHC cost billions.
– It’s possible to have an understanding of how LUX works in its entirety. The LHC is too big and has too many detectors for that!
– The LHC is 175 m underground. LUX is 1,478 m underground, over 8x deeper, and so is much better shielded from cosmic rays.
– The LHC has encountered problems both times it has tried to start up. LUX is running smoothly right now!
– I actually feel kind of bad now, because I like the LHC, so I will stop.

Dark matter sensitivity limits, comparing LHC results to LUX in red. The x axis is the mass of the dark matter particle, and the y axis is its interaction probability. The smaller this number, the greater the sensitivity.

Anyway, if you fancy giving the world’s most sensitive dark matter detector a hint of a chance in its battle against the behemoth LHC, vote LUX. Let’s beat the system!

by Sally Shaw at March 25, 2015 11:27 AM

Lubos Motl - string vacua and pheno

CMS: a 2.9-sigma \(WH\) hint at \(1850\GeV\)


Unfortunately, due to a short circuit somewhere at the LHC, a small metallic piece will have to be removed – which takes a week (it's so slow because CERN employs LEGO men to do the job) – and the 2015 LHC physics run may be postponed by up to 5 weeks because of that.
Wolfram: You have the last week to buy Mathematica at a 25% discount (a "pi day" celebration; student edition). Edward Measure has already happily bought it.
Meanwhile, ATLAS and CMS have flooded their web pages with new papers resulting from the 2012 run. In most of these papers, the Standard Model gets an "A".




That is not quite the case for the CMS note
Search for massive \(WH\) resonances decaying to \(\ell\nu b\bar b\) final state in the boosted regime at \(\sqrt{s}=8\TeV\)
because a local 2.9-sigma excess is seen in the muon subchannel – see Figures 5, 6b, and 7 – for the mass of a new hypothetical charged particle \(1.8\TeV\leq m_{W'} \leq 1.9 \TeV\).




It's a small excess – the confidence level gets reduced to about 2 sigma with the look-elsewhere correction – but this new hypothetical charged particle could be interpreted within the Littlest Higgs model (theory) or a Heavy Vector Triplet model, among other, perhaps more likely ones.
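
As a rough illustration of how a local excess gets diluted by the look-elsewhere effect, here is a minimal Python sketch; the trials factor of ~12 is an assumed, purely illustrative number chosen to reproduce the quoted reduction, not a figure from the CMS note.

from scipy.stats import norm

local_sigma = 2.9
p_local = norm.sf(local_sigma)              # one-sided local p-value, ~1.9e-3

trials_factor = 12                          # assumed number of independent mass hypotheses (illustrative only)
p_global = min(1.0, trials_factor * p_local)
global_sigma = norm.isf(p_global)           # convert the penalized p-value back to a significance

print(p_local, p_global, global_sigma)      # roughly 2 sigma after the look-elsewhere penalty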

In the (now) long list of LHC anomalies mentioned on this blog, some could look similar, especially the \(2.1 \TeV\) right-handed \(W_R^\pm\)-boson (CMS July 2014) and the strange effective-mass \(1.65\TeV\) events (ATLAS March 2012).

by Luboš Motl (noreply@blogger.com) at March 25, 2015 08:47 AM

March 24, 2015

astrobites - astro-ph reader's digest

The First Star Clusters

Title: The Luminosity of Population III Star Clusters

Authors: Alexander L. DeSouza and Shantanu Basu

First Author’s Institution: Department of Physics and Astronomy, University of Western Ontario

Status: Accepted by MNRAS

First light

A major goal for the next generation of telescopes, such as the James Webb Space Telescope (JWST), is to study the first stars and galaxies in the universe. But what would they look like? Would JWST be able to see them? Recent studies have suggested that even the most massive specimens of the very first generation of stars, known as Population III stars, may be undetectable with JWST.

But not all hope is lost–one of the reasons why Population III stars are so hard to detect is that, unlike later generations of stars, they are believed to form in isolation. Later generations of stars (called Population I and Population II stars) usually form in clusters, from the fragmentation of large clouds of molecular gas. On the other hand, cosmological simulations have suggested that Population III stars would form from gas collected in dark matter mini-halos of about a million solar masses, which would have virialized (reached dynamic equilibrium) by redshifts of about 20-50. Molecular hydrogen acts as a coolant in this scenario, allowing the gas to cool enough to condense down into a star. Early simulations showed that gravitational fragmentation would eventually produce one massive fragment–on the order of about a hundred solar masses–per halo. This molecular hydrogen, however, could easily be destroyed by the UV radiation from the first massive star formed, preventing others from forming from the same parent cloud of gas. So while Population III stars in this paradigm are thought to be much more massive than later generations of stars, they would also be isolated from other ancient stars.

However, there is a lot of uncertainty about the masses of these first stars, and recent papers have investigated the possibility that the picture could be more complicated than first thought. The molecular gas in the dark matter mini-halos could experience more fragmentation before it reaches stellar density, which may lead to multiple smaller stars, rather than one large one, forming from the same cloud of gas. These stars could then evolve relatively independently of each other. The authors of today’s paper investigate the idea that Population III stars could have formed in clusters and also study the luminosity of the resulting groups of stars.

Methodology

Figure 1 from the paper showing the evolution of a single protostar in time steps of 5 kyr. The leftmost image shows the protostar and its disk at 5 kyr after the formation of the protostar. Some fragments can be seen at radii of 10 AU to several hundred AU. They can then accrete onto the protostar in bursts of accretion. The middle time step shows a quiescent phase: there are no fragments within 300 AU of the disk and no new ones are forming, so the disk is relatively smooth; the fragments that formed during an earlier phase have been raised to higher orbits. The rightmost image shows the system at 15 kyr after the formation of the protostar, showing how some of the larger fragments can be sheared apart and produce large fluctuations in the luminosity of the protostar as they are accreted.

The authors of today’s paper begin by arguing that the pristine, mostly atomic gas that collects in these early dark matter mini-halos could fragment by the Jeans criterion in a manner similar to the giant molecular clouds that we see today. This fragmentation would produce small clusters of stars that are relatively isolated from each other, so they are able to model each of the members in the cluster independently. They do this by using numerical hydrodynamical simulations in the thin-disk limit.

Their fiducial model is a gas of 300 solar masses, about 0.5 pc in radius, and at a temperature of 300 K. They find that the disk that forms around the protostars (the large fragments of gas that have contracted out of the original cloud of gas) forms relatively quickly, within about 3 kyr of the formation of the protostar. The disk begins to fragment a few hundred years after it forms. These clumps can then accrete onto the protostar in bursts of accretion or get raised to higher orbits.

Most of the time, however, the protostar is in a quiescent phase and is accreting mass relatively smoothly. The luminosity of the overall star cluster increases during the bursts of accretion, and it also increases as new protostars are formed. The increasing luminosity of the stellar cluster can make it more difficult to detect single accretion events. For clusters of a moderate size of about 16 members, these competing effects result in the star cluster spending about 15% of its time at an elevated luminosity, sometimes even 1000 times the quiescent luminosity. The star clusters can then have luminosities approaching and occasionally exceeding 10^8 solar luminosities. Population III stars with masses ranging from 100-500 solar masses, on the other hand, are likely to have luminosities of about 10^6 to 10^7 solar luminosities.

These clusters would be some of the most luminous objects at these redshifts and would make a good target for telescopes such as ALMA and JWST. We have few constraints on the star formation rates at such high redshifts, and a lot of uncertainty in what the earliest stars would look like. So should these exist, even if we couldn’t see massive individual population III stars, we may still be able to detect these clusters of smaller stars and gain insight into what star formation looked like at the beginning of our universe.

by Caroline Huang at March 24, 2015 10:57 PM

John Baez - Azimuth

Stationary Stability in Finite Populations

guest post by Marc Harper

A while back, in the article Relative entropy minimization in evolutionary dynamics, we looked at extensions of the information geometry / evolutionary game theory story to more general time-scales, incentives, and geometries. Today we’ll see how to make this all work in finite populations!

Let’s recall the basic idea from last time, which John also described in his information geometry series. The main theorem is this: when there’s an evolutionarily stable state for a given fitness landscape, the relative entropy between the stable state and the population distribution decreases along the population trajectories as they converge to the stable state. In short, relative entropy is a Lyapunov function. This is a nice way to look at the action of a population under natural selection, and it has interesting analogies to Bayesian inference.

The replicator equation is a nice model from an intuitive viewpoint, and it’s mathematically elegant. But it has some drawbacks when it comes to modeling real populations. One major issue is that the replicator equation implicitly assumes that the population proportions of each type are differentiable functions of time, obeying a differential equation. This only makes sense in the limit of large populations. Other closely related models, such as the Lotka-Volterra model, focus on the number of individuals of each type (e.g. predators and prey) instead of the proportion. But they often assume that the number of individuals is a differentiable function of time, and a population of 3.5 isn’t very realistic either.

Real populations of replicating entities are not infinitely large; in fact they are often relatively small and of course have whole numbers of each type, at least for large biological replicators (like animals). They take up space and only so many can interact meaningfully. There are quite a few models of evolution that handle finite populations and some predate the replicator equation. Models with more realistic assumptions typically have to leave the realm of derivatives and differential equations behind, which means that the analysis of such models is more difficult, but the behaviors of the models are often much more interesting. Hopefully by the end of this post, you’ll see how all of these diagrams fit together:








One of the best-known finite population models is the Moran process, which is a Markov chain on a finite population. This is the quintessential birth-death process. For a moment consider a population of just two types A and B. The state of the population is given by a pair of nonnegative integers (a,b) with a+b=N, the total number of replicators in the population, and a and b the number of individuals of type A and B respectively. Though it may seem artificial to fix the population size N, this often turns out not to be that big of a deal, and you can assume the population is at its carrying capacity to make the assumption realistic. (Lots of people study populations that can change size and that have replicators spatially distributed, say on a graph, but we’ll assume they can all interact with each other whenever they want for now.)

A Markov model works by transitioning from state to state in each round of the process, so we need to define the transition probabilities to complete the model. Let’s put a fitness landscape on the population, given by two functions f_A and f_B of the population state (a,b). Now we choose an individual to reproduce proportionally to fitness, e.g. we choose an A individual to reproduce with probability

\displaystyle{ \frac{a f_A}{a f_A + b f_B} }

since there are a individuals of type A and they each have fitness f_A. This is analogous to the ratio of fitness to mean fitness from the discrete replicator equation, since

\displaystyle{ \frac{a f_A}{a f_A + b f_B} =  \frac{\frac{a}{N} f_A}{\frac{a}{N} f_A + \frac{b}{N} f_B} \to \frac{x_i f_i(x)}{\overline{f(x)}} }

and the discrete replicator equation is typically similar to the continuous replicator equation (this can be made precise), so the Moran process captures the idea of natural selection in a similar way. Actually there is a way to recover the replicator equation from the Moran process in large populations—details at the end!

We’ll assume that the fitnesses are nonnegative and that the total fitness (the denominator) is never zero; if that seems artificial, some people prefer to transform the fitness landscape by e^{\beta f(x)}, which gives a ratio reminiscent of the Boltzmann or Fermi distribution from statistical physics, with the parameter \beta playing the role of intensity of selection rather than inverse temperature. This is sometimes called Fermi selection.

That takes care of the birth part. The death part is easier: we just choose an individual at random (uniformly) to be replaced. Now we can form the transition probabilities of moving between population states. For instance the probability of moving from state (a,b) to (a+1, b-1) is given by the product of the birth and death probabilities, since they are independent:

\displaystyle{ T_a^{a+1} = \frac{a f_A}{a f_A + b f_B} \frac{b}{N} }

since we have to choose a replicator of type A to reproduce and one of type B to be replaced. Similarly for (a,b) to (a-1, b+1) (switch all the a’s and b’s), and we can write the probability of staying in the state (a, N-a) as

T_a^{a} = 1 - T_{a}^{a+1} - T_{a}^{a-1}

Since we only replace one individual at a time, this covers all the possible transitions, and keeps the population constant.

We’d like to analyze this model and many people have come up with clever ways to do so, computing quantities like fixation probabilities (also known as absorption probabilities), indicating the chance that the population will end up with one type completely dominating, i.e. in state (0, N) or (N,0). If we assume that the fitness of type A is constant and simply equal to 1, and the fitness of type B is r \neq 1, we can calculate the probability that a single mutant of type B will take over a population of type A using standard Markov chain methods:

\displaystyle{\rho = \frac{1 - r^{-1}}{1 - r^{-N}} }

For neutral relative fitness (r=1), \rho = 1/N, which is the probability a neutral mutant invades by drift alone since selection is neutral. Since the two boundary states (0, N) or (N,0) are absorbing (no transitions out), in the long run every population ends up in one of these two states, i.e. the population is homogeneous. (This is the formulation referred to by Matteo Smerlak in The mathematical origins of irreversibility.)
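
Here is a minimal Python sketch (my own illustration, not code from the paper) that checks this closed-form fixation probability against a direct simulation of the birth-death process just described.

import random

def fixation_probability(r, N):
    """Closed-form chance that a single type-B mutant of relative fitness r takes over
    a population of N-1 type-A individuals with fitness 1."""
    if r == 1.0:
        return 1.0 / N                       # neutral drift
    return (1 - r**-1) / (1 - r**-N)

def simulate_fixation(r, N, trials=10000):
    """Estimate the same probability by running the Moran process directly."""
    fixed = 0
    for _ in range(trials):
        b = 1                                # number of type-B individuals
        while 0 < b < N:
            a = N - b
            if random.random() < b * r / (a + b * r):   # B is chosen to reproduce...
                if random.random() < a / N:             # ...and the uniformly chosen victim is an A
                    b += 1
            else:                                       # A reproduces...
                if random.random() < b / N:             # ...and a B is replaced
                    b -= 1
        fixed += (b == N)
    return fixed / trials

print(fixation_probability(1.1, 20))   # ~0.107
print(simulate_fixation(1.1, 20))      # should land close to the value above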

That’s a bit different flavor of result than what we discussed previously, since we had stable states where both types were present, and now that’s impossible, and a bit disappointing. We need to make the population model a bit more complex to have more interesting behaviors, and we can do this in a very nice way by adding the effects of mutation. At the time of reproduction, we’ll allow either type to mutate into the other with probability \mu. This changes the transition probabilities to something like

\displaystyle{ T_a^{a+1} = \frac{a (1-\mu) f_A + b \mu f_B}{a f_A + b f_B} \frac{b}{N} }

Now the process never stops wiggling around, but it does have something known as a stationary distribution, which gives the probability that the population is in any given state in the long run.

For populations with more than two types the basic ideas are the same, but there are more neighboring states that the population could move to, and many more states in the Markov process. One can also use more complicated mutation matrices, but this setup is good enough to typically guarantee that no one species completely takes over. For interesting behaviors, typically \mu = 1/N is a good choice (there’s some biological evidence that mutation rates are typically inversely proportional to genome size).

Without mutation, once the population reached (0,N) or (N,0), it stayed there. Now the population bounces between states, either because of drift, selection, or mutation. Based on our stability theorems for evolutionarily stable states, it’s reasonable to hope that for small mutation rates and larger populations (less drift), the population should spend most of its time near the evolutionarily stable state. This can be measured by the stationary distribution which gives the long run probabilities of a process being in a given state.

Previous work by Claussen and Traulsen:

• Jens Christian Claussen and Arne Traulsen, Non-Gaussian fluctuations arising from finite populations: exact results for the evolutionary Moran process, Physical Review E 71 (2005), 025101.

suggested that the stationary distribution is at least sometimes maximal around evolutionarily stable states. Specifically, they showed that for a very similar model with fitness landscape given by

\left(\begin{array}{c} f_A \\ f_B \end{array}\right)  = \left(\begin{array}{cc} 1 & 2\\ 2&1 \end{array}\right)  \left(\begin{array}{c} a\\ b \end{array}\right)

the stationary state is essentially a binomial distribution centered at (N/2, N/2).

Unfortunately, the stationary distribution can be very difficult to compute for an arbitrary Markov chain. While it can be computed for the Markov process described above without mutation, and in the case studied by Claussen and Traulsen, there’s no general analytic formula for the process with mutation, nor for more than two types, because the processes are not reversible. Since we can’t compute the stationary distribution analytically, we’ll have to find another way to show that the local maxima of the stationary distribution are “evolutionarily stable”. We can approximate the stationary distribution fairly easily with a computer, so it’s easy to plot the results for just about any landscape and reasonable population size (e.g. N \approx 100).
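
For the two-type process with mutation described above, that numerical approximation takes only a few lines of Python. This is a minimal sketch of my own (not the authors' code); the example landscape is the [[1,2],[2,1]] game from the Claussen-Traulsen discussion.

import numpy as np

def stationary_distribution(N, f_A, f_B, mu):
    """Stationary distribution of the two-type Moran process with mutation.
    States are a = 0..N (number of type-A individuals); f_A and f_B map (a, b) to fitnesses."""
    T = np.zeros((N + 1, N + 1))
    for a in range(N + 1):
        b = N - a
        fa, fb = f_A(a, b), f_B(a, b)
        p_A = (a * fa * (1 - mu) + b * fb * mu) / (a * fa + b * fb)   # newborn is of type A
        up = p_A * b / N              # an A is born and a B dies
        down = (1 - p_A) * a / N      # a B is born and an A dies
        if a < N:
            T[a, a + 1] = up
        if a > 0:
            T[a, a - 1] = down
        T[a, a] = 1 - up - down
    # the stationary distribution is the left eigenvector of T with eigenvalue 1
    w, v = np.linalg.eig(T.T)
    s = np.real(v[:, np.argmax(np.real(w))])
    return np.abs(s) / np.abs(s).sum()

s = stationary_distribution(N=100, f_A=lambda a, b: a + 2 * b, f_B=lambda a, b: 2 * a + b, mu=0.01)
print(s.argmax())   # the mode should sit near a = N/2, the stable interior state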

It turns out that we can use a relative entropy minimization approach, just like for the continuous replicator equation! But how? We lack some essential ingredients such as deterministic and differentiable trajectories. Here’s what we do:

• We show that the local maxima and minima of the stationary distribution satisfy a complex balance criterion.

• We then show that these states minimize an expected relative entropy.

• This will mean that the current state and the expected next state are ‘close’.

• Lastly, we show that these states satisfy an analogous definition of evolutionary stability (now incorporating mutation).

The relative entropy allows us to measure how close the current state is to the expected next state, which captures the idea of stability in another way. This ports the relative minimization Lyapunov result to some more realistic Markov chain models. The only downside is that we’ll assume the populations are “sufficiently large”, but in practice for populations of three types, N=20 is typically enough for common fitness landscapes (there are lots of examples here for N=80, which are prettier than the smaller populations). The reason for this is that the population state (a,b) needs enough “resolution” (a/N, b/N) to get sufficiently close to the stable state, which is not necessarily a ratio of integers. If you allow some wiggle room, smaller populations are still typically pretty close.

Evolutionarily stable states are closely related to Nash equilibria, which have a nice intuitive description in traditional game theory as “states that no player has an incentive to deviate from”. But in evolutionary game theory, we don’t use a game matrix to compute e.g. maximum payoff strategies, rather the game matrix defines a fitness landscape which then determines how natural selection unfolds.

We’re going to see this idea again in a moment, and to help get there let’s introduce a function called an incentive that encodes how a fitness landscape is used for selection. One way is to simply replace the quantities a f_A(a,b) and b f_B(a,b) in the fitness-proportionate selection ratio above, which now becomes (for two population types):

\displaystyle{ \frac{\varphi_A(a,b)}{\varphi_A(a,b) + \varphi_B(a,b)} }

Here \varphi_A(a,b) and \varphi_B(a,b) are the incentive function components that determine how the fitness landscape is used for natural selection (if at all). We have seen two examples above:

\varphi_A(a,b) = a f_A(a, b)

for the Moran process and fitness-proportionate selection, and

\varphi_A(a,b) = a e^{\beta f_A(a, b)}

for an alternative that incorporates a strength of selection term \beta, preventing division by zero for fitness landscapes defined by zero-sum game matrices, such as a rock-paper-scissors game. Using an incentive function also simplifies the transition probabilities and results as we move to populations of more than two types. Introducing mutation, we can describe the ratio for incentive-proportionate selection with mutation for the ith population type when the population is in state x=(a,b,\ldots) / N as

\displaystyle{ p_i(x) = \frac{\sum_{k=1}^{n}{\varphi_k(x) M_{i k} }}{\sum_{k=1}^{n}{\varphi_k(x)}} }

for some matrix of mutation probabilities M. This is just the probability that we get a new individual of the ith type (by birth and/or mutation). A common choice for the mutation matrix is to use a single mutation probability \mu and spread it out over all the types, such as letting

M_{ij} = \mu / (n-1)

and

M_{ii} = 1 - \mu

Now we are ready to define the expected next state for the population and see how it captures a notion of stability. For a given population state in a multitype population, using x to indicate the normalized population state (a,b,\ldots) / N, consider all the neighboring states y that the population could move to in one step of the process (one birth-death cycle). These neighboring states are the result of increasing a population type by one (birth) and decreasing another by one (death, possibly the same type), of course excluding cases on the boundary where the number of individuals of any type drops below zero or rises above N. Now we can define the expected next state as the sum of neighboring states weighted by the transition probabilities

E(x) = \sum_{y}{y T_x^{y}}

with transition probabilities given by

T_{x}^{y} = p_{i}(x) x_{j}

for states y that differ in 1/N at the ith coordinate and -1/N at jth coordinate from x. Here x_j is just the probability of the random death of an individual of the jth type, so the transition probabilities are still just birth (with mutation) and death as for the Moran process we started with.

Skipping some straightforward algebraic manipulations, we can show that

\displaystyle{ E(x) = \sum_{y}{y T_x^{y}} = \frac{N-1}{N}x + \frac{1}{N}p(x)}

Then it’s easy to see that E(x) = x if and only if x = p(x), and that x = p(x) if and only if x_i = \varphi_i(x). So we have a nice description of ‘stability’ in terms of fixed points of the expected next state function and the incentive function

x = E(x) = p(x) = \varphi(x),

and we’ve gotten back to “no one has an incentive to deviate”. More precisely, for the Moran process

\varphi_i(x) = x_i f_i(x)

and we get back f_i(x) = f_j(x) for every type. So we take x = \varphi(x) as our analogous condition to an evolutionarily stable state, though it’s just the ‘no motion’ part and not also the ‘stable’ part. That’s what we need the stationary distribution for!

To turn this into a useful number that measures stability, we use the relative entropy of the expected next state and the current state, in analogy with the Lyapunov theorem for the replicator equation. The relative entropy

\displaystyle{ D(x, y) = \sum_i x_i \ln(x_i) - x_i \ln(y_i) }

has the really nice property that D(x,y) = 0 if and only if x = y, so we can use the relative entropy D(E(x), x) as a measure of how close to stable any particular state is! Here the expected next state takes the place of the ‘evolutionarily stable state’ in the result described last time for the replicator equation.
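
Here is a minimal Python sketch of these two ingredients, the expected next state E(x) and the relative entropy D(E(x), x), for a three-type population with the uniform mutation kernel and Fermi incentive described above. It is my own illustration rather than the authors' code, and the rock-paper-scissors game matrix is just a convenient example.

import numpy as np

def expected_next_state(x, N, fitness, mu, beta=1.0):
    """E(x) = ((N-1) x + p(x)) / N for the incentive process with a uniform mutation kernel.
    x is an interior normalized state (all entries positive); fitness(x) returns the vector f_i(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    phi = x * np.exp(beta * fitness(x))        # Fermi incentive: phi_i = x_i exp(beta f_i(x))
    M = np.full((n, n), mu / (n - 1))          # mutation matrix: M_ij = mu/(n-1), M_ii = 1 - mu
    np.fill_diagonal(M, 1 - mu)
    p = (M @ phi) / phi.sum()                  # p_i(x): probability the newborn is of type i
    return ((N - 1) * x + p) / N

def relative_entropy(x, y):
    """D(x, y) = sum_i x_i ln(x_i / y_i); only valid away from the boundary of the simplex."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(x * (np.log(x) - np.log(y))))

# Illustration with a rock-paper-scissors landscape (my choice of example, not from the paper)
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
fitness = lambda x: A @ x
x = np.array([1/3, 1/3, 1/3])                  # the interior fixed point for this game
print(relative_entropy(expected_next_state(x, N=60, fitness=fitness, mu=1/60), x))   # ~0 here
x = np.array([0.5, 0.3, 0.2])
print(relative_entropy(expected_next_state(x, N=60, fitness=fitness, mu=1/60), x))   # > 0 away from it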

Finally, we need to show that the maxima (and minima) of the stationary distribution are these fixed points by showing that these states minimize the expected relative entropy.

Seeing that local maxima and minima of the stationary distribution minimize the expected relative entropy is more involved, so let’s just sketch the details. In general, these Markov processes are not reversible, so they don’t satisfy the detailed-balance condition, but the stationary probabilities do satisfy something called the global balance condition, which says that for the stationary distribution s we have that

s_x \sum_{y}{T_x^{y}} = \sum_{y}{s_y T_y^{x}}

When the stationary distribution is at a local maximum (or minimum), we can show essentially that this implies (up to an \epsilon, for a large enough population) that

\displaystyle{\sum_{y}{T_x^{y}} = \sum_{y}{T_y^{x}} }

a sort of probability inflow-outflow equation, which is very similar to the condition of complex balanced equilibrium described by Manoj Gopalkrishnan in this Azimuth post. With some algebraic manipulation, we can show that these states have E(x)=x.

Now let’s look again at the figures from the start. The first shows the vector field of the replicator equation:

You can see rest points at the center, on the center of each boundary edge, and on the corner points. The center point is evolutionarily stable, the center points of the boundary are semi-stable (but stable when the population is restricted to a boundary simplex), and the corner points are unstable.

This one shows the stationary distribution for a finite population model with a Fermi incentive on the same landscape, for a population of size 80:

A fixed population size gives a partitioning of the simplex, and each triangle of the partition is colored by the value of the stationary distribution. So you can see that there are local maxima in the center and on the centers of the triangle boundary edges. In this case, the size of the mutation probability determines how much of the stationary distribution is concentrated on the center of the simplex.

This shows one-half of the Euclidean distance squared between the current state and the expected next state:

And finally, this shows the same thing but with the relative entropy as the ‘distance function’:

As you can see, the Euclidean distance is locally minimal at each of the local maxima and minima of the stationary distribution (including the corners); the relative entropy is only guaranteed to be so on the interior states (because the relative entropy doesn’t play nicely with the boundary, and unlike the replicator equation, the Markov process can jump on and off the boundary). It turns out that the relative Rényi entropies for q between 0 and 1 also work just fine, but in the large population limit (the replicator dynamic), the relative entropy is somehow the right choice for the replicator equation (it has the derivative that easily gives Lyapunov stability), which is due to the connections between relative entropy and Fisher information in the information geometry of the simplex. The Euclidean distance is the q=0 case and the ordinary relative entropy is q=1.

As it turns out, something very similar holds for another popular finite population model, the Wright–Fisher process! This model is more complicated, so if you are interested in the details, check out our paper, which has many nice examples and figures. We also define a process that bridges the gap between the atomic nature of the Moran process and the generational nature of the Wright–Fisher process, and prove the general result for that model.

Finally, let’s see how the Moran process relates back to the replicator equation (see also the appendix in this paper), and how we recover the stability theory of the replicator equation. We can use the transition probabilities of the Moran process to define a stochastic differential equation (called a Langevin equation) with drift and diffusion terms that are essentially (for populations with two types):

\mathrm{Drift}(x) = T^{+}(x) - T^{-}(x)

\displaystyle{ \mathrm{Diffusion}(x) = \sqrt{\frac{T^{+}(x) + T^{-}(x)}{N}} }

As the population size gets larger, the diffusion term drops out, and the stochastic differential equation becomes essentially the replicator equation. For the stationary distribution, the variance (e.g. for the binomial example above) also has an inverse dependence on N, so the distribution limits to a delta-function that is zero except for at the evolutionarily stable state!
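
A quick way to see this numerically is to integrate that Langevin equation with an Euler-Maruyama step for increasing N. The sketch below is my own illustration (no mutation, the proportional form of the [[1,2],[2,1]] landscape from earlier, and arbitrarily chosen step sizes).

import numpy as np

def langevin_path(x0, N, fA, fB, steps=2000, dt=0.01, seed=1):
    """Euler-Maruyama integration of the Langevin approximation sketched above,
    for the proportion x of type A in a two-type population (no mutation)."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        fa, fb = fA(x), fB(x)
        tp = x * fa / (x * fa + (1 - x) * fb) * (1 - x)   # T^+(x): an A is born, a B dies
        tm = (1 - x) * fb / (x * fa + (1 - x) * fb) * x   # T^-(x): a B is born, an A dies
        drift = tp - tm
        diffusion = np.sqrt((tp + tm) / N)
        x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
        x = min(max(x, 1e-9), 1 - 1e-9)                   # stay inside (0, 1)
    return x

# Proportional form of the [[1,2],[2,1]] landscape: stable interior point at x = 1/2
fA = lambda x: 2 - x
fB = lambda x: 1 + x

print(langevin_path(0.2, N=100, fA=fA, fB=fB))      # noisy: wanders around 0.5
print(langevin_path(0.2, N=100000, fA=fA, fB=fB))   # diffusion ~ 1/sqrt(N): essentially the deterministic flow to 0.5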

What about the relative entropy? Loosely speaking, as the population size gets larger, the iteration of the expected next state also becomes deterministic. Then the evolutionarily stable state is a fixed point of the expected next state function, and the expected relative entropy is essentially the same as the ordinary relative entropy, at least in a neighborhood of the evolutionarily stable state. This is good enough to establish local stability.

Earlier I said both the local maxima and minima minimize the expected relative entropy. Dash and I haven’t proven that the local maxima always correspond to evolutionarily stable states (and the minima to unstable states). That’s because the generalization of evolutionarily stable state we use is really just a ‘no motion’ condition, and isn’t strong enough to imply stability in a neighborhood for the deterministic replicator equation. So for now we are calling the local maxima stationary stable states.

We’ve also tried a similar approach to populations evolving on networks, which is a popular topic in evolutionary graph theory, and the results are encouraging! But there are many more ‘states’ in such a process, since the configuration of the network has to be taken into account, and whether the population is clustered together or not. See the end of our paper for an interesting example of a population on a cycle.


by John Baez at March 24, 2015 06:00 PM

Symmetrybreaking - Fermilab/SLAC

LHC will not restart this week

Engineers and technicians may need to warm up and recool a section of the accelerator before they can introduce particles.

The Large Hadron Collider will not restart this week, according to a statement from CERN.

Engineers and technicians are investigating an intermittent short circuit to ground in one of the machine’s magnet circuits. They identified the problem during a test run on March 21. It is a well understood issue, but one that could take time to resolve since it is in a cold section of the machine. The repair process may require warming up and re-cooling that part of the accelerator.

“Any cryogenic machine is a time amplifier,” says CERN’s Director for Accelerators, Frédérick Bordry, “so what would have taken hours in a warm machine could end up taking us weeks.”

Current indications suggest a delay of between a few days and several weeks. CERN's press office says a revised schedule will be announced as soon as possible.

The other seven of the machine’s eight sectors have successfully been commissioned to the 2015 operating energy of 6.5 trillion electron-volts per beam.

According to the statement, the impact on LHC operation will be minimal: 2015 is a year for fully understanding the performance of the upgraded machine with a view to full-scale physics running in 2016 through 2018.

“All the signs are good for a great Run II,” says CERN Director General Rolf Heuer. “In the grand scheme of things, a few weeks delay in humankind’s quest to understand our universe is little more than the blink of an eye.”

 


 

by Kathryn Jepsen at March 24, 2015 05:43 PM

arXiv blog

Spacecraft Traveling Close to Light Speed Should Be Visible with Current Technology, Say Engineers

Relativistic spacecraft must interact with the cosmic microwave background in a way that produces a unique light signature. And that means we should be able to spot any nearby, according to a new analysis.

March 24, 2015 03:00 PM

Symmetrybreaking - Fermilab/SLAC

Physics Madness: The Supersymmetric Sixteen

Which physics machine will reign supreme? Your vote decides.

Editor's Note: This round has closed; move on to the Elemental Eight!

 

March is here, and that means one thing: brackets. We’ve matched up 16 of the coolest pieces of particle physics equipment that help scientists answer big questions about our universe. Your vote will decide this year’s favorite.
The tournament will last four rounds, starting with the Supersymmetric Sixteen today, moving on to the Elemental Eight on March 27, then the Fundamental Four on March 31 and finally the Grand Unified Championship on April 3. The first round’s match-ups are below. You have until midnight PDT on Thursday, March 26, to vote in this round. May the best physics machine win!

 

by Lauren Biron at March 24, 2015 01:00 PM

March 23, 2015

astrobites - astro-ph reader's digest

The Radial Velocity Method: Current and Future Prospects

To date, we have confirmed more than 1500 extrasolar planets, with over 3300 other planet candidates waiting to be confirmed. These planets have been found with different methods (see Figure 1). The two currently most successful are the transit method and the radial velocity method. The former measures the periodic dimming of a star as an orbiting planet passes in front of it, and tends to find short-period, large-radius planets. The latter works like this: as a planet orbits its host star, the planet tugs the host star, causing the star to move in its own tiny orbit. This wobble motion —which increases with increasing planet mass— can be detected as tiny shifts in the star’s spectra. Detect the wobble, and we have found a planet.

That being said, in our quest to find even more exoplanets, where do we invest our time and money? Do we pick one method over another? Or do we spread our efforts, striving to advance all of them simultaneously? How do we assess how each of them is working; how do we even begin? Here it pays off to take a stand, to make some decisions on how to proceed, to set realistic and achievable goals, to define a path forward that the exoplanet community can agree to follow.

Figure 1: Currently confirmed planets (as of December 2014), showing planetary masses as a function of period. To date, the radial velocity method (red), and the transit method (green), are by far the most successful planet-finding techniques. Other methods include: microlensing, imaging, transit timing variations, and orbital brightness modulation. Figure 42 from the report.

To do this effectively, and to ensure that the US exoplanet community has a plan, NASA’s Exoplanet Exploration Program (ExEP) appoints a so-called Program Analysis Group (ExoPAG). This group is responsible for coordinating community input into the development and execution of NASA’s exoplanetary goals, and serves as a forum to analyze its priorities for future exoplanetary exploration. Most of ExoPAG’s work is conducted in a number of Study Analysis Groups (SAGs). Each group focuses on one specific exoplanet topic, and is headed by some of the leading scientists in the corresponding sub-topic. These topics include: discussing future flagship missions, characterizing exoplanet atmospheres, and analyzing individual detection techniques and their future. A comprehensive list of the different SAGs is maintained here.

One of the SAGs focused their efforts on analyzing the current and future prospects of the radial velocity method. Recently, the group published an analysis report which discusses the current state-of-affairs of the radial velocity technique, and recommends future steps towards increasing its sensitivity. Today’s astrobite summarizes this report.

The questions this SAG studied can roughly be divided into three categories:

1-2: Radial velocity detection sensitivity is primarily limited by two categories of systematic effects. First, by long-term instrument stability, and second, by astrophysical sources of jitter.

3:  Finding planets with the radial velocity technique requires large amounts of observing time. We thus have to account for what telescopes are available, and how we design effective radial velocity surveys.

We won’t talk so much about the last category in this astrobite. But, let’s dive right into the former two.

Instrumentation Objectives

No instrument is perfect. All instruments have something that ultimately limits their sensitivity. We can make more sensitive measurements with a ruler if we make the tick-marks denser. Make the tick-marks too dense, and we can’t tell them apart. Our sensitivity is limited.

Astronomical instruments that measure radial velocities —called spectrographs— are, too, limited in sensitivity. Their sensitivity is to a large extent controlled by how stable they are over long periods of time. Various environmental factors —such as mechanical vibrations, thermal variations, and pressure changes— cause unwanted shifts in the stellar spectra, that can all masquerade as a radial velocity signal. Minimize such variations, and work on correcting —or calibrating out— the unwanted signals they cause, and we increase the sensitivity. Not an easy job.

Figure 2: Masses of planets detected with the radial velocity technique, as a function of their discovery year. More planets are being found each year, hand-in-hand with increasing instrument sensitivity. For transiting planets the actual masses are plotted, otherwise the minimum mass is plotted. Figure 43 from the report.

Still, it can be done, and we are getting better at it. Figure 2 shows that we are finding lighter and lighter planets, hand-in-hand with increasing instrument sensitivity: we are able to detect smaller and smaller wobble motions. Current state-of-the-art spectrographs are, in the optical, sensitive down to 1 m/s wobble motions, and only slightly worse (1-3 m/s) in the near infrared. To put things in perspective, the Earth exerts a 9 cm/s wobble on the Sun. Thus, to find true Earth analogs, we need instruments sensitive to a few centimeters per second. The authors of the report note that achieving 10-20 cm/s instrument precision is realistic within a few years; some such instruments are even being developed as we speak. A further push on these next-generation spectrographs is strongly recommended by the authors, as they support a path towards finding Earth analogues.
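
To get a feel for these numbers, here is a minimal Python sketch of the reflex radial velocity a planet on a circular orbit induces on its star; the constants and example values are back-of-the-envelope inputs of mine, not figures from the report.

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
M_JUP = 1.898e27       # kg
YEAR = 3.156e7         # s

def rv_semi_amplitude(m_planet, period_s, m_star=M_SUN, inclination_deg=90.0):
    """Stellar reflex velocity (m/s) induced by a planet on a circular orbit."""
    sin_i = math.sin(math.radians(inclination_deg))
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet * sin_i / (m_star + m_planet) ** (2 / 3)

print(rv_semi_amplitude(M_EARTH, 1 * YEAR))    # ~0.09 m/s: the Earth's ~9 cm/s tug on the Sun
print(rv_semi_amplitude(M_JUP, 12 * YEAR))     # ~12 m/s: Jupiter's much larger signal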

Science Objectives

Having a perfect spectrograph, with perfect precision, would, however, not solve the whole problem. This is due to stellar jitter: the star itself can produce signals that can wrongly be interpreted as planets. Our ultimate sensitivity or precision is constrained by our physical understanding of the stars we observe.

Stellar jitter originates from various sources. The sources have different timescales, ranging from minutes and hours (e.g. granulation), to days and months (e.g. star spots), and even up to years (e.g. magnetic activity cycles). Figure 3 gives a good overview of the main sources of stellar jitter. Many of the sources are understood, and can be mitigated (green boxes), but other signals still pose problems (red boxes), and require more work. The blue boxes are more or less solved. We would like to see more green boxes.

Figure 3: An overview diagram of stellar jitter that affects radial velocity measurements. Note the different timescales. Green boxes denote an understood problem, but the red boxes require significantly more work. Blue boxes are somewhere in between. Figure 44 from the report.

Conclusion

The radial velocity method is one way to discover and characterize exoplanets. In this report, one of NASA’s Study Analysis Groups evaluates the current status of the method. Moreover, with input from the exoplanet community, the group discusses recommendations for moving forward, to ensure that this method continues to be a workhorse for finding and characterizing exoplanets. This will involve efficiently scheduled observatories and significant investments in technology development (see a great list of current and future spectrographs here), data analysis, and our understanding of the astrophysics behind stellar jitter. With these efforts, we take steps towards discovering and characterizing true Earth analogs.

Full Disclosure: My adviser is one of the authors of the SAG white paper report. I chose to cover it here for two reasons: first, I wanted to deepen your insight into this exciting subfield, and second, to deepen my own.

by Gudmundur Stefansson at March 23, 2015 11:37 PM

ZapperZ - Physics and Physicists

This Tennis Act Disproves Physics?!
Since when?!

Why is it that when some "morons" see something that they can't comprehend, they always claim that it violates physics, or can't be explained by physics, as IF they understand physics well enough to make such a judgement? I mean, c'mon!

This is the case here, where the writer claims that Novak Djokovic's ability to stop a ball coming at him with his racket somehow defies physics, turning it all into "a lie".

Look, I know this was probably written in jest, and probably without a second thought, but this kind of sloppy journalism should really be called out. There is nothing here that can't be explained by physics. If Djokovic had held the racket with a stiff arm, he would not have been able to stop the ball the way he did; it would have bounced off the strings. But look at how he stopped it: he moved his arm back to "absorb" the impact, letting the racket and strings give way as the ball arrived. This is the idea of impulse: the change in the ball's momentum equals the average force multiplied by the time over which it acts, so spreading the momentum change over a longer time means the force on the ball stays small enough that it doesn't bounce off the strings.
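To put rough numbers on the impulse argument (the ball speed and contact times below are guesses for illustration, not measurements from the clip):

# Average force needed to stop a tennis ball: F = (momentum change) / (stopping time)
m = 0.057          # kg, mass of a tennis ball
v = 30.0           # m/s, assumed incoming ball speed
dp = m * v         # momentum change required to bring the ball to rest

for dt in (0.005, 0.05):   # stiff-arm stop vs. a yielding, drawn-back racket (assumed times)
    print("stop over %.3f s -> average force %.0f N" % (dt, dp / dt))

Stretching the stopping time by a factor of ten cuts the average force by the same factor, which is exactly why drawing the racket back lets the ball die on the strings instead of bouncing off.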

In other words, what is observed can easily be explained by physics!

BTW, Martina Navratilova did this same thing a few times while she was an active player. I've witnessed her doing it at least twice during matches, so it is not as if this is anything new. Not only that: although it is less spectacular and easier to do, badminton players do it numerous times as well when they are trying to catch a shuttlecock.

Zz.

by ZapperZ (noreply@blogger.com) at March 23, 2015 05:04 PM

arXiv blog

Twitter Data Mining Reveals the Origins of Support for Islamic State

Studying the pre-Islamic State tweets of people who end up backing the organization paints a revealing picture of how support emerges, say computer scientists.


Back in May 2014, news emerged that an Egyptian man called Ahmed Al-Darawy had died on the battlefields of Iraq while fighting for the Islamic State of Iraq and the Levant, otherwise known as Islamic State or ISIS.

March 23, 2015 03:05 PM

Tommaso Dorigo - Scientificblogging

Spring Flukes: New 3-Sigma Signals From LHCb And ATLAS
Spring is finally in, and with it the great expectations for a new run of the Large Hadron Collider, which will restart in a month or so with a 62.5% increase in the center-of-mass energy of the proton-proton collisions it produces: 13 TeV. At 13 TeV, the production of a 2-TeV Z' boson, say, would not be so terribly rare, making a signal soon visible in the data that ATLAS and CMS are eager to collect.

read more

by Tommaso Dorigo at March 23, 2015 11:41 AM

March 21, 2015

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A day out (and a solar eclipse) at Maynooth University

I had a most enjoyable day on Friday at the mathematical physics department of Maynooth University, or NUI Maynooth, to give it its proper title. I was there to attend an international masterclass in particle physics. This project, a few years old, is a superb science outreach initiative associated with CERN, the European Centre for Particle Physics in Geneva, home of the famous Large Hadron Collider (LHC). If you live on planet Earth, you will probably have heard that a famous particle known as the Higgs boson was recently discovered at the LHC. The idea behind the masterclasses is to give secondary school students the opportunity to “become a particle physicist for a day” by performing measurements on real data from CERN.


The day got off to a great start with a lecture on “Quarks, leptons and the forces of nature” by Dr. Paul Watts, a theoretical physicist at Maynooth. It was an excellent introduction to the world of particle physics, and I was amused by Paul’s cautious answer to a question on the chances of finding supersymmetric particles at the LHC. What the students didn’t know was that Paul studied under the late Bruno Zumino, a world expert on supersymmetry and one of the pioneers of the theory. Paul’s seminar was followed by another lecture, “Particle Physics Experiments and the Little Bang”, an excellent talk on the detection of particles at the LHC by Dr Jonivar Skullerud, another physicist at Maynooth. In between the two lectures, we all trooped outside in the hope of seeing something of the day’s solar eclipse. I was not hopeful, given that the sky was heavily overcast until about 9.30. Lo and behold, the skies cleared in time and we all got a ringside view of the event through glasses supplied by the Maynooth physics department! Now that’s how you impress visitors to the college…

Viewing the eclipse

After lunch we had the workshop proper. Each student was assigned a computer on which software had been installed that allowed them to analyse particle events from the ALICE detector at the LHC (lead ion collisions). Basically, the program allowed the students to measure the momentum and energy of the decay products of particles from the tracks produced in collisions, allowing them to calculate the mass of the parent particle and thus identify it. As so often, I was impressed by how quickly the students got the hang of the program – having missed the introduction thanks to a meeting, I was by far the slowest in the room. We all then submitted our results, only to find a large discrepancy between the total number of particles we detected and the number predicted by theory! We then all returned to the conference room and uploaded our results to the control room at the LHC. It was fun comparing our data live with other groups around Europe and discussing the results. Much hilarity greeted the fact that many of the other groups got very different results, and the explanations offered for them (though what many groups really wanted to know was whether we got a good look at the eclipse in Ireland).

Uploading our results via a conference call with the control room at the LHC, CERN
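For the curious, here is a minimal sketch (my own, not the masterclass software) of the measurement the students performed: add up the energies and momenta of the decay products and compute the invariant mass of the parent, m^2 = E_total^2 - |p_total|^2, in units with c = 1. The example numbers are made up, but they are kinematically consistent with a neutral kaon decaying to two charged pions.

# Reconstruct a parent particle's mass from its decay products (GeV, c = 1).
import math

def invariant_mass(products):
    """products: list of (E, px, py, pz) tuples, one per decay product."""
    E  = sum(p[0] for p in products)
    px = sum(p[1] for p in products)
    py = sum(p[2] for p in products)
    pz = sum(p[3] for p in products)
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back pion-like tracks (invented values): each has E = 0.2488 GeV
# and |p| = 0.2059 GeV, i.e. roughly the mass of a charged pion.
tracks = [(0.2488, 0.2059, 0.0, 0.0), (0.2488, -0.2059, 0.0, 0.0)]
print("reconstructed parent mass: %.3f GeV" % invariant_mass(tracks))

The reconstructed mass comes out near 0.498 GeV, the mass of the neutral kaon, one of the strange particles the ALICE masterclass exercise asks students to identify.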

All in all, a wonderful way for students to get a glimpse of life in the world of the LHC, to meet active particle physics researchers, and to link up with students from other countries. See here for the day’s program.


by cormac at March 21, 2015 07:27 PM

John Baez - Azimuth

Thermodynamics with Continuous Information Flow

guest post by Blake S. Pollard

Over a century ago James Clerk Maxwell created a thought experiment that has helped shape our understanding of the Second Law of Thermodynamics: the law that says entropy can never decrease.

Maxwell’s proposed experiment was simple. Suppose you had a box filled with an ideal gas at equilibrium at some temperature. You stick in an insulating partition, splitting the box into two halves. These two halves are isolated from one another except for one important caveat: somewhere along the partition resides a being capable of opening and closing a door, allowing gas particles to flow between the two halves. This being is also capable of observing the velocities of individual gas particles. Every time a particularly fast molecule is headed towards the door the being opens it, letting it fly into the other half of the box. When a slow particle heads towards the door the being keeps it closed. After some time, fast molecules would build up on one side of the box, meaning that half of the box would heat up! To an observer it would seem like the box, originally at a uniform temperature, would for some reason start splitting up into a hot half and a cold half. This seems to violate the Second Law (as well as all our experience with boxes of gas).

Of course, this apparent violation probably has something to do with positing the existence of intelligent microscopic doormen. This being, and the thought experiment itself, are typically referred to as Maxwell’s demon.

Photo credit: Peter MacDonald, Edmonds, UK

When people cook up situations that seem to violate the Second Law there is typically a simple resolution: you have to consider the whole system! In the case of Maxwell’s demon, while the entropy of the box decreases, the entropy of the system as a whole, demon included, goes up. Precisely quantifying how Maxwell’s demon doesn’t violate the Second Law has led people to a better understanding of the role of information in thermodynamics.

At the American Physical Society March Meeting in San Antonio, Texas, I had the pleasure of hearing some great talks on entropy, information, and the Second Law. Jordan Horowitz, a postdoc at Boston University, gave a talk on his work with Massimiliano Esposito, a researcher at the University of Luxembourg, on how one can understand situations like Maxwell’s demon (and a whole lot more) by analyzing the flow of information between subsystems.

Consider a system made up of two parts, X and Y. Each subsystem has a discrete set of states, and each subsystem makes transitions among its states. These dynamics can be modeled as Markov processes. Horowitz and Esposito are interested in modeling the thermodynamics of information flow between subsystems. To this end they consider a bipartite system, meaning that either X transitions or Y transitions, never both at the same time. The probability distribution p(x,y) of the whole system evolves according to the master equation:

\displaystyle{ \frac{dp(x,y)}{dt} = \sum_{x', y'} H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}p(x,y) }

where H_{x,x'}^{y,y'} is the rate at which the system transitions from (x',y') \to (x,y). The ‘bipartite’ condition means that H has the form

H_{x,x'}^{y,y'} = \left\{ \begin{array}{cc} H_{x,x'}^y & x \neq x'; y=y' \\   H_x^{y,y'} & x=x'; y \neq y' \\  0 & \text{otherwise.} \end{array} \right.

The joint system is an open system that satisfies the second law of thermodynamics:

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_{XY}}{dt} + \frac{dS_e}{dt} \geq 0 }

where

\displaystyle{ S_{XY} = - \sum_{x,y} p(x,y) \ln ( p(x,y) ) }

is the Shannon entropy of the system, satisfying

\displaystyle{ \frac{dS_{XY} }{dt} = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}   p(x,y) \right] \ln \left( \frac{p(x',y')}{p(x,y)} \right) }

and

\displaystyle{ \frac{dS_e}{dt}  = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y} p(x,y) \right] \ln \left( \frac{ H_{x,x'}^{y,y'} } {H_{x',x}^{y',y} } \right) }

is the entropy change of the environment.

We want to investigate how the entropy production of the whole system relates to entropy production in the bipartite pieces X and Y. To this end they define a new flow, the information flow, as the time rate of change of the mutual information

\displaystyle{ I = \sum_{x,y} p(x,y) \ln \left( \frac{p(x,y)}{p(x)p(y)} \right) }

Its time derivative can be split up as

\displaystyle{ \frac{dI}{dt} = \frac{dI^X}{dt} + \frac{dI^Y}{dt}}

where

\displaystyle{ \frac{dI^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^{y} p(x',y) - H_{x',x}^{y}p(x,y) \right] \ln \left( \frac{ p(y|x) }{p(y|x')} \right) }

and

\displaystyle{ \frac{dI^Y}{dt} = \sum_{x,y} \left[ H_{x}^{y,y'}p(x,y') - H_{x}^{y',y}p(x,y) \right] \ln \left( \frac{p(x|y)}{p(x|y')} \right) }

are the information flows associated with the subsystems X and Y respectively.
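To make this concrete, here is a minimal numerical sketch (mine, not from the paper by Horowitz and Esposito; all the rate constants are invented) of a bipartite system built from two-state subsystems X and Y. It evolves p(x,y) under the master equation above and evaluates the mutual information I directly from its definition. Because the X rates depend on y and vice versa, the long-time state carries nonzero mutual information.

# Minimal bipartite Markov process: two coupled two-state subsystems X and Y.
import numpy as np

nx, ny = 2, 2

# H[x, xp, y, yp] is the rate for the jump (xp, yp) -> (x, y).
# Bipartite condition: a single jump changes either x or y, never both.
H = np.zeros((nx, nx, ny, ny))
for y in range(ny):
    H[1, 0, y, y] = 1.0 + 0.5 * y      # X: 0 -> 1, rate depends on y (coupling)
    H[0, 1, y, y] = 0.7                # X: 1 -> 0
for x in range(nx):
    H[x, x, 1, 0] = 0.3 + 0.8 * x      # Y: 0 -> 1, rate depends on x (coupling)
    H[x, x, 0, 1] = 0.4                # Y: 1 -> 0

def dpdt(p):
    """Master equation: gain into (x, y) minus loss out of (x, y)."""
    gain = np.einsum('abcd,bd->ac', H, p)   # sum_{x',y'} H[x,x',y,y'] p(x',y')
    loss = H.sum(axis=(0, 2)) * p           # total escape rate out of (x, y) times p(x, y)
    return gain - loss

def mutual_info(p):
    """I = sum_{x,y} p(x,y) ln[ p(x,y) / (p(x) p(y)) ]."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log((p / (px * py))[mask])))

# Start from a product (zero-information) state and integrate with Euler steps.
p = np.full((nx, ny), 0.25)
dt = 1e-3
for step in range(20000):
    p = p + dt * dpdt(p)

print("long-time p(x,y):\n", p)
print("mutual information I:", mutual_info(p))   # nonzero: X 'knows' something about Y

One could transcribe the expressions for dI^X/dt and dI^Y/dt in exactly the same way and watch the two flows balance once the system settles into its steady state.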

When

\displaystyle{ \frac{dI^X}{dt} > 0}

a transition in X increases the mutual information I, meaning that X ‘knows’ more about Y and vice versa.

We can rewrite the entropy production entering into the second law in terms of these information flows as

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_i^X}{dt} + \frac{dS_i^Y}{dt} }

where

\displaystyle{ \frac{dS_i^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^y p(x',y) - H_{x',x}^y p(x,y) \right] \ln \left( \frac{H_{x,x'}^y p(x',y) } {H_{x',x}^y p(x,y) } \right) \geq 0 }

and similarly for \frac{dS_i^Y}{dt}. This gives the following decomposition of entropy production in each subsystem:

\displaystyle{ \frac{dS_i^X}{dt} = \frac{dS^X}{dt} + \frac{dS^X_e}{dt} - \frac{dI^X}{dt} \geq 0 }

\displaystyle{ \frac{dS_i^Y}{dt} = \frac{dS^Y}{dt} + \frac{dS^Y_e}{dt} - \frac{dI^Y}{dt} \geq 0},

where the inequalities hold for each subsystem. To see this, if you write out the left hand side of each inequality you will find that both are sums of terms of the form

\displaystyle{ \left[ a-b \right] \ln \left( \frac{a}{b} \right) }

which is non-negative for a, b \geq 0.

The interaction between the subsystems is contained entirely in the information flow terms. Neglecting these terms gives rise to situations like Maxwell’s demon where a subsystem seems to violate the second law.

Lots of Markov processes have boring equilibria \frac{dp}{dt} = 0 where there is no net flow among the states. Markov processes also admit non-equilibrium steady states, where there may be some constant flow of information. In this steady state all explicit time derivatives are zero, including the net information flow:

\displaystyle{ \frac{dI}{dt} = 0 }

which implies that \frac{dI^X}{dt} = - \frac{dI^Y}{dt}. In this situation the above inequalities become

\displaystyle{ \frac{dS^X_i}{dt} = \frac{dS_e^X}{dt} - \frac{dI^X}{dt} }

and

\displaystyle{ \frac{dS^Y_i}{dt} = \frac{dS_e^Y}{dt} + \frac{dI^X}{dt} }.

If

\displaystyle{ \frac{dI^X}{dt} > 0 }

then X is learning something about Y, or acting as a sensor. The first inequality, \frac{dS_e^X}{dt} \geq \frac{dI^X}{dt}, quantifies the minimum amount of energy X must supply to do this sensing. Similarly, -\frac{dS_e^Y}{dt} \leq \frac{dI^X}{dt} bounds the amount of useful energy that is available to Y as a result of this information transfer.

In their paper Horowitz and Esposito explore a few other examples and show the utility of this simple breakup of a system into two interacting subsystems in explaining various interesting situations in which the flow of information has thermodynamic significance.

For the whole story, read their paper!

• Jordan Horowitz and Massimiliano Esposito, Thermodynamics with continuous information flow, Phys. Rev. X 4 (2014), 031015.


by John Baez at March 21, 2015 01:00 AM

Geraint Lewis - Cosmic Horizons

Moving Charges and Magnetic Fields
Still struggling with grant writing season, so here is another post resulting from my random musings about the Universe (which actually happen quite a lot).

In second semester, I am teaching electricity and magnetism to our First Year Advanced Class. I really enjoy teaching this class as the kids are on the ball and can ask some deep and meaningful questions.

But the course is not ideal. Why? Because we teach from a textbook and the problem is that virtually all modern text books are almost the same. Science is trotted out in an almost historical progression. But it does not have to be taught that way.

In fact, it would be great if we could start with Hamiltonian and Lagrangian approaches, and derive physics from a top down approach. We're told that it's mathematically too challenging, but it really isn't. In fact, I would start with a book like The Theoretical Minimum, not some multicoloured compendium of physics.

We have to work with what we have!

One of the key concepts that we have to get across is that electricity and magnetism are not really two separate things, but are actually two sides of the same coin. And, in the world of classical physics, it was the outstanding work of James Clerk Maxwell that provided the mathematical framework that brought them together. Maxwell gave us his famous equations that underpin electromagnetism.
Again, being the advanced class, we can go beyond this and look at the work that came after Maxwell, and that was the work of Albert Einstein, especially his Special Theory of Relativity.

The wonderful thing about special relativity is that the mix of electric and magnetic fields depends upon the motion of an observer. One person sees a particular configuration of electric and magnetic fields, and another observer, moving relative to the first, will see a different mix of electric and magnetic fields.

This is nice to say, but what does it actually mean? Can we do anything with it to help understand electricity and magnetism a little more? I think so.

In this course (and EM courses in general) we spend a lot of time calculating the electric field of a static charge distribution. For this, we use the rather marvellous Gauss's law, that relates the electric field distribution to the underlying charges.
I've written about this wonderful law before, and showed how you can use symmetries (i.e. nice simple shapes like spheres, boxes and cylinders) to calculate the electric field.

Then we come to the sources of magnetic field. And things, well, get messy. There are some rules we can use, but it's, well, as I said, messy.

We know that magnetic fields are due to moving charges, but what's the magnetic field of a lonely little charge moving on its own? Looks something like this
Where does this come from? And how do you calculate it? Is there an easier way?

And the answer is yes! The kids have done a touch of special relativity at high school and (without really knowing it in detail) have seen the Lorentz transformations. Now, introductory lessons on special relativity often harp on about swimming back and forth across rivers, or something like that, and have a merry dance before getting to the point. And the transforms are presented as a way to map coordinates from one observer to another, but they are much more powerful than that.

You can use them to transform vectors from one observer's viewpoint to another. Including electric and magnetic fields. And it's simple algebra.

where we also have the famous Lorentz factor. So, what does this set of equations tell us? Well, if we have an observer who sees a particular electric field (Ex,Ey,Ez) and magnetic field (Bx,By,Bz), then an observer moving with a velocity v (in the x-direction) will see the electric and magnetic fields given by the primed components.

Now, we know what the electric field of an isolated charge at rest is. We can use Gauss's law, and it tells us that the field is spherically symmetric and looks like this
The field drops off in strength with the square of the distance. What would be the electric and magnetic fields if this charge were trundling past us at a velocity v? Easy, we just use the Lorentz transforms to tell us. We know exactly what the electric field of the charge at rest looks like, and we know that, at rest, there is no magnetic field.

Being as lazy as I am, I didn't want to calculate anything by hand, so I chucked it into MATLAB, a mathematical environment that many students have access to. I'm not going to be an apologist for MATLAB's default graphics style (which I think sucks - but there are, with a bit of work, solutions).
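For readers without MATLAB, here is a rough Python equivalent (my own sketch, not the author's script). It sets c = 1 and the Coulomb constant to 1, and it only transforms the field components at each grid point, which is enough for the qualitative pictures shown below; feeding Ep and Bp into a quiver plot gives figures of the kind in this post.

# Coulomb field of a charge at rest, Lorentz-boosted to a moving frame (c = 1).
import numpy as np

def coulomb_field(X, Y, Z, q=1.0):
    """E field of a point charge at rest at the origin; B is zero at rest."""
    r2 = X**2 + Y**2 + Z**2
    r3 = np.where(r2 > 0, r2**1.5, np.inf)    # avoid the singular point at the origin
    E = q * np.stack([X, Y, Z]) / r3
    B = np.zeros_like(E)
    return E, B

def boost_fields(E, B, beta):
    """Standard field transformations for a boost with speed beta along +x:
       E'_x = E_x,  E'_y = g (E_y - beta B_z),  E'_z = g (E_z + beta B_y)
       B'_x = B_x,  B'_y = g (B_y + beta E_z),  B'_z = g (B_z - beta E_y)
    """
    g = 1.0 / np.sqrt(1.0 - beta**2)          # the famous Lorentz factor
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = np.stack([Ex, g * (Ey - beta * Bz), g * (Ez + beta * By)])
    Bp = np.stack([Bx, g * (By + beta * Ez), g * (Bz - beta * Ey)])
    return Ep, Bp

# A small grid of field points and a boost of 10% of the speed of light.
x = np.linspace(-2, 2, 9)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
E, B = coulomb_field(X, Y, Z)
Ep, Bp = boost_fields(E, B, beta=0.1)

print("max |B| with the charge at rest :", np.abs(B).max())
print("max |B'| with the charge moving :", np.abs(Bp).max())   # a magnetic field appears!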



Anyway, here's a charge at rest. The blue arrows are the electric field. No magnetic field, remember!
So, top left is a view along the x-axis, then y, then z, then a 3-D view. Cool!

Now, what does this charge look like if it is moving relative to me? Throw it into the Lorentz transforms, and voila!


MAGNETIC FIELDS!!! The charge is moving along the x-axis with respect to me, and when we look along x we can see that the magnetic fields wrap around the direction of motion (remember your right hand grip rule kids!).

That was for a velocity of 10% the speed of light. Let's whack it up to 99.999%
The electric field gets distorted also!

Students also use Gauss's law to calculate the electric field of an infinitely long line of charge. Now the strength of the field drops off as the inverse of the distance from the line of charge.


Now, let's consider an observer moving at a velocity relative to the line of charge.
Excellent! Similar to what we saw before, and what we would expect. The magnetic field curls around the moving line of charge (which, of course, is simply an electric current).

Didn't we know that, you say? Yes, but I think this approach is more powerful: not only does it reveal the relativistic relationship between the electric and magnetic fields, but once you have written the few lines of algebraic code in MATLAB (or Python, or whatever the kids are using these days) you can ask about more complicated situations. You can play with physics (which, IMHO, is how you really understand it).

So, to round off, what's the magnetic field of a perpendicular infinite line of charge moving with respect to you? I am sure you could, with a bit of work, calculate it with the usual mathematical approaches, but let's just take a look.

Here it is at rest
A bit like further up, but now pointing along a different axis.

Before we add velocity, you physicists and budding physicists make a prediction! Here goes! A tenth the velocity of light and we get
I dunno if we were expecting that! Remember, top left is looking along the x-axis, along the direction of motion. So we have created some magnetic structure. Just not the simple structure we normally see!

And now at 99.99% we get
And, of course, I could play with lots of other geometries, like what happens if you move a ring of charge etc. But let's not get too excited, and come back to that another day.

by Cusp (noreply@blogger.com) at March 21, 2015 12:11 AM

March 20, 2015

Lubos Motl - string vacua and pheno

LHC insists on a near-discovery muon penguin excess
None of the seemingly strong anomalies reported by the LHCb collaboration has survived so far, but many people rely on this blog as a source and trust that similar events are not being overlooked by TRF, so I must give you a short report about a new bold announcement by LHCb.
20 March 2015: \(B^0\to K^*\mu^+\mu^-\): new analysis confirms old puzzle (LHCb CERN website)
In July 2013, TRF readers were told about the 3.7 sigma excess in these muon decays of B-mesons.

The complete 2011-2012 dataset – just 3 inverse femtobarns, because we are talking about LHCb (perhaps I should remind you that it is a "cheaper" LHC detector that focuses on bottom quarks and therefore on CP violation and flavor violation) – has now been analyzed. The absolute strength of the signal has decreased, but so did the noise, so the significance level remained at 3.7 sigma!




The Quanta Magazine quickly wrote a story with an optimistic title
‘Penguin’ Anomaly Hints at Missing Particles
where a picture of a penguin seems to play a central role.




Why are we talking about these Antarctic birds here? It's because they are actually Feynman diagrams.



The Standard Model calculates the probability of the decay of the B-mesons to the muon pairs via a one-loop diagram – which is just the skeleton of the picture above – and this diagram has been called "penguin" by particle physicists who didn't see that it was really a female with big breasts and a very thin waistline.

But there may have been more legitimate reasons for the "penguin" terminology – for example, because it sounds more concise than a "Dolly Buster diagram", for example. ;-)

The point is that there are particular particles running in the internal lines of the diagram according to the Standard Model and an excess of these decays would probably be generated by a diagram of the same "penguin" topology but with new particle species used for the internal lines. Those hypothetical beasts are indicated by the question marks on the penguin picture.

Adam Falkowski at Resonaances adds some skeptical words about this deviation. He thinks that what the Standard Model predicts is highly uncertain, so there is no good reason to conclude that it must be new physics, even though he thinks it has become very unlikely that it's just noise.

Perhaps more interestingly, the Quanta Magazine got an answer from Nima, who talked about his heart being broken by LHCb numerous times in the past.

Various papers have proposed partially satisfactory models attempting to explain the anomaly. For example, two months ago, I described a two-Higgs model with a gauged lepton-mu-minus-tau number which claims to explain this anomaly along with two others.

Gordon Kane discussed muon decays of B-mesons in his guest blog in late 2012, before similar anomalies became widely discussed by the experimenters, and he sketched his superstring explanation for these observations.

LHCb is a role model for an experiment that "may see an anomaly" but "doesn't really tell us who is the culprit" – the same unsatisfactory semi-answer that you may get from high-precision colliders etc. That's why the brute force and high energy – along with omnipotent detectors such as ATLAS and CMS – seem to be so clearly superior in the end. The LHCb is unlikely to make us certain that it is seeing something new – even if it surpasses 5 sigma – because even if it does see something, it doesn't tell us sufficiently many details for the "story about the new discovery" to make sense.

But it's plausible that these observations will be very useful when a picture of new physics starts to emerge thanks to the major experiments...

The acronym LHCb appears in 27 TRF blog posts.

by Luboš Motl (noreply@blogger.com) at March 20, 2015 05:12 PM

astrobites - astro-ph reader's digest

Wreaking Havoc with a Stellar Fly-By


Illustration of the RW Aurigae system containing two stars (dubbed A and B) and a disk around Star A. The schematic labels the angles needed to define the system’s geometry and angular momenta. Fig. 1 from the paper.

Physical models in astronomy are generally as simple as possible. On the one hand, you don’t want to oversimplify reality. On the other hand, you don’t want to throw in more parameters than could ever be constrained from observations. But some cases deviate just enough from a basic textbook case to be really interesting, like the subject of today’s paper: a pair of stars called RW Aurigae that features a long arm of gas and dust wrapped around one star.

You can’t model RW Aurigae as a single star with a disk of material around it, because there is a second star. And you can’t model it as a regular old binary system either, because there are interactions between the stars and the asymmetric circumstellar disk. The authors of today’s paper create a comprehensive smooth particle hydrodynamic model that considers many different observations of RW Aurigae. They consider the system’s shape, size, motion, composition, and geometry, and they conduct simulated observations of the model for comparison with real observations.

A tidal encounter


Simulated particles in motion in RW Aurigae. This view of the smooth particle hydrodynamic model has each particle color-coded by its velocity toward or away from the observer (the color bar is in km/s). Star A is the one in front with a large tidally disrupted disk. Fig. 2 from the paper.

The best-fitting model of RW Aurigae matches observations of many different aspects as observed today, including particle motions. Because the model is like a movie you can play backward or forward in time, the authors are able to show that the current long arm of gas most likely came from a tidal disruption. This was long suspected to be the case, based on appearance alone, but this paper’s detailed model shows that the physics checks out with our intuition.

What exactly is a tidal disruption? Well, in this case, over the last 600 years or so (a remarkably short time in astronomy!), star B passed close enough to the disk around star A to tear it apart with gravity. Because some parts of the disk were significantly closer to star B than others, they felt different amounts of gravitational force. As time went on, this changed the orbits of individual particles in the disk and caused the overall shape to change. This is the same effect that creates tides on Earth: opposite sides of the Earth sit at different distances from the Moon, so the oceans on the far side feel a weaker pull than the water on the near side. The figure above shows present-day motions of simulated particles in RW Aurigae that resulted from the tidal encounter. The figure below shows snapshots from a movie of the hydrodynamic model, from about 600 years in the past to today. Instead of representing motion, the brighter regions represent more particles (higher density).


Snapshots of the RW Aurigae model over a 600-year time span as viewed from Earth. From top left to bottom right the times are -589, -398, -207, and 0 years from today, respectively. Brighter colors indicate higher density. There is a stream of material linking stars A and B in the bottom right panel, but it is not visible here due to the density contrast chosen. Fig. 4 from the paper.
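To put a rough number on the tidal-force argument above, here is a quick sketch (standard Earth-Moon values, my own arithmetic, nothing from the paper) of how much the pull of the Moon differs across the Earth:

# Differential (tidal) pull of the Moon across the Earth.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_moon = 7.35e22     # kg
d = 3.84e8           # m, mean Earth-Moon distance
R_earth = 6.37e6     # m

g_near = G * M_moon / (d - R_earth)**2   # pull on the near-side ocean
g_far  = G * M_moon / (d + R_earth)**2   # pull on the far-side ocean
g_ctr  = G * M_moon / d**2               # pull on the centre of the Earth

print("acceleration at the centre : %.3e m/s^2" % g_ctr)
print("near-side excess           : %.3e m/s^2" % (g_near - g_ctr))
print("far-side deficit           : %.3e m/s^2" % (g_ctr - g_far))

The differences are only about a millionth of a metre per second squared, yet they are enough to raise the two tidal bulges; the same differential effect, scaled up during a close stellar fly-by, can strip and distort a circumstellar disk.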

Simulating observations

Observational astronomers are always constrained to viewing the cosmos from one angle. We don’t get to choose how systems like RW Aurigae are oriented on the sky. But models let us change our viewing angle and better understand the full 3D physical picture. The figure below shows the disk around star A if we could view it from above in the present. As before, brighter areas have more particles. Simulated observations, such as measuring the size of the disk in the figure below, agree well with actual observations of RW Aurigae.


Top-down view of the present-day disk in star A from the RW Aurigae model. The size of the model disk agrees with estimates from observations, and the disk has clearly become eccentric after its tidal encounter with star B. Fig. 9 from the paper.

The final mystery the authors of today’s paper explore is a dimming that happened during observations of RW Aurigae in 2010/2011. The model suggests this dimming was likely caused by the stream of material between stars A and B passing in front of star A along our line of sight. However, since the disk and related material are clumpy and still changing shape, they make no predictions about specific future dimming events. Interestingly, another recent astro-ph paper by Petrov et al. reports another deep dimming in 2014. They suggest it may arise from dust grains close to star A being “stirred up” by strong stellar winds and moving into our line of sight.

Combining models and observations like today’s paper does is an incredibly useful technique to learn about how structures of all sizes form in the Universe. Tides affect everything from Earth, to stars, to galaxies. This is one of the first cases we’ve seen of a protoplanetary disk having a tidal encounter. The Universe is a messy place, and understanding dynamic interactions like RW Aurigae’s is an important step toward a clearer picture of how stars, planets, and galaxies form and evolve.

by Meredith Rawls at March 20, 2015 04:24 PM

Sean Carroll - Preposterous Universe

Guest Post: Don Page on God and Cosmology

Don Page is one of the world’s leading experts on theoretical gravitational physics and cosmology, as well as a previous guest-blogger around these parts. (There are more world experts in theoretical physics than there are people who have guest-blogged for me, so the latter category is arguably a greater honor.) He is also, somewhat unusually among cosmologists, an Evangelical Christian, and interested in the relationship between cosmology and religious belief.

Longtime readers may have noticed that I’m not very religious myself. But I’m always willing to engage with people with whom I disagree, if the conversation is substantive and proceeds in good faith. I may disagree with Don, but I’m always interested in what he has to say.

Recently Don watched the debate I had with William Lane Craig on “God and Cosmology.” I think these remarks from a devoted Christian who understands the cosmology very well will be of interest to people on either side of the debate.


Open letter to Sean Carroll and William Lane Craig:

I just ran across your debate at the 2014 Greer-Heard Forum, and greatly enjoyed listening to it. Since my own views are often a combination of one or the others of yours (though they also often differ from both of yours), I thought I would give some comments.

I tend to be skeptical of philosophical arguments for the existence of God, since I do not believe there are any that start with assumptions universally accepted. My own attempt at what I call the Optimal Argument for God (one, two, three, four), certainly makes assumptions that only a small fraction of people, and perhaps even only a small fraction of theists, believe in, such as my assumption that the world is the best possible. You know that well, Sean, from my provocative seminar at Caltech in November on “Cosmological Ontology and Epistemology” that included this argument at the end.

I mainly think philosophical arguments might be useful for motivating someone to think about theism in a new way and perhaps raise the prior probability someone might assign to theism. I do think that if one assigns theism not too low a prior probability, the historical evidence for the life, teachings, death, and resurrection of Jesus can lead to a posterior probability for theism (and for Jesus being the Son of God) being quite high. But if one thinks a priori that theism is extremely improbable, then the historical evidence for the Resurrection would be discounted and not lead to a high posterior probability for theism.

I tend to favor a Bayesian approach in which one assigns prior probabilities based on simplicity and then weights these by the likelihoods (the probabilities that different theories assign to our observations) to get, when the product is normalized by dividing by the sum of the products for all theories, the posterior probabilities for the theories. Of course, this is an idealized approach, since we don’t yet have _any_ plausible complete theory for the universe to calculate the conditional probability, given the theory, of any realistic observation.

For me, when I consider evidence from cosmology and physics, I find it remarkable that it seems consistent with all we know that the ultimate theory might be extremely simple and yet lead to sentient experiences such as ours. A Bayesian analysis with Occam’s razor to assign simpler theories higher prior probabilities would favor simpler theories, but the observations we do make preclude the simplest possible theories (such as the theory that nothing concrete exists, or the theory that all logically possible sentient experiences occur with equal probability, which would presumably make ours have zero probability in this theory if there are indeed an infinite number of logically possible sentient experiences). So it seems mysterious why the best theory of the universe (which we don’t have yet) may be extremely simple but yet not maximally simple. I don’t see that naturalism would explain this, though it could well accept it as a brute fact.

One might think that adding the hypothesis that the world (all that exists) includes God would make the theory for the entire world more complex, but it is not obvious that is the case, since it might be that God is even simpler than the universe, so that one would get a simpler explanation starting with God than starting with just the universe. But I agree with your point, Sean, that theism is not very well defined, since for a complete theory of a world that includes God, one would need to specify the nature of God.

For example, I have postulated that God loves mathematical elegance, as well as loving to create sentient beings, so something like this might explain both why the laws of physics, and the quantum state of the universe, and the rules for getting from those to the probabilities of observations, seem much simpler than they might have been, and why there are sentient experiences with a rather high degree of order. However, I admit there is a lot of logically possible variation on what God’s nature could be, so that it seems to me that at least we humans have to take that nature as a brute fact, analogous to the way naturalists would have to take the laws of physics and other aspects of the natural universe as brute facts. I don’t think either theism or naturalism solves this problem, so it seems to me rather a matter of faith which makes more progress toward solving it. That is, theism per se cannot deduce from purely a priori reasoning the full nature of God (e.g., when would He prefer to maintain elegant laws of physics, and when would He prefer to cure someone from cancer in a truly miraculous way that changes the laws of physics), and naturalism per se cannot deduce from purely a priori reasoning the full nature of the universe (e.g., what are the dynamical laws of physics, what are the boundary conditions, what are the rules for getting probabilities, etc.).

In view of these beliefs of mine, I am not convinced that most philosophical arguments for the existence of God are very persuasive. In particular, I am highly skeptical of the Kalam Cosmological Argument, which I shall quote here from one of your slides, Bill:

  1. If the universe began to exist, then there is a transcendent cause which brought the universe into existence.
  2. The universe began to exist.
  3. Therefore, there is a transcendent cause which brought the universe into existence.

I do not believe that the first premise is metaphysically necessary, and I am also not at all sure that our universe had a beginning. (I do believe that the first premise is true in the actual world, since I do believe that God exists as a transcendent cause which brought the universe into existence, but I do not see that this premise is true in all logically possible worlds.)

I agree with you, Sean, that we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects (or occur at the same time, if Bill insists). But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future. I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause, since the effective unidirectional causation we commonly experience is something just within the universe and need not be extrapolated to a putative cause for the universe as a whole.

However, since to me the totality of data, including the historical evidence for the Resurrection of Jesus, is most simply explained by postulating that there is a God who is the Creator of the universe, I do believe by faith that God is indeed the cause of the universe (and indeed the ultimate Cause and Determiner of everything concrete, that is, everything not logically necessary, other than Himself—and I do believe, like Richard Swinburne, that God is concrete and not logically necessary, the ultimate brute fact). I have a hunch that God created a universe with apparent unidirectional causation in order to give His creatures some dim picture of the true causation that He has in relation to the universe He has created. But I do not see any metaphysical necessity in this.

(I have a similar hunch that God created us with the illusion of libertarian free will as a picture of the true freedom that He has, though it might be that if God does only what is best and if there is a unique best, one could object that even God does not have libertarian free will, but in any case I would believe that it would be better for God to do what is best than to have any putative libertarian free will, for which I see little value. Yet another hunch I have is that it is actually sentient experiences rather than created individual `persons’ that are fundamental, but God created our experiences to include beliefs that we are individual persons to give us a dim image of Him as the one true Person, or Persons in the Trinitarian perspective. However, this would take us too far afield from my points here.)

On the issue of whether our universe had a beginning, besides not believing that this is at all relevant to the issue of whether or not God exists, I agreed almost entirely with Sean’s points rather than yours, Bill, on this issue. We simply do not know whether or not our universe had a beginning, but there are certainly models, such as Sean’s with Jennifer Chen (hep-th/0410270 and gr-qc/0505037), that do not have a beginning. I myself have also favored a bounce model in which there is something like a quantum superposition of semiclassical spacetimes (though I don’t really think quantum theory gives probabilities for histories, just for sentient experiences), in most of which the universe contracts from past infinite time and then has a bounce to expand forever. In as much as these spacetimes are approximately classical throughout, there is a time in each that goes from minus infinity to plus infinity.

In this model, as in Sean’s, the coarse-grained entropy has a minimum at or near the time when the spatial volume is minimized (at the bounce), so that entropy increases in both directions away from the bounce. At times well away from the bounce, there is a strong arrow of time, so that in those regions if one defines the direction of time as the direction in which entropy increases, it is rather as if there are two expanding universes both coming out from the bounce. But it is erroneous to say that the bounce is a true beginning of time, since the structure of spacetime there (at least if there is an approximately classical spacetime there) has timelike curves going from a proper time of minus infinity through the bounce (say at proper time zero) and then to proper time of plus infinity. That is, there are worldlines that go through the bounce and have no beginning there, so it seems rather artificial to say the universe began at the bounce that is in the middle just because it happens to be when the entropy is minimized. I think Sean made this point very well in the debate.

In other words, in this model there is a time coordinate t on the spacetime (say the proper time t of a suitable collection of worldlines, such as timelike geodesics that are orthogonal to the extremal hypersurface of minimal spatial volume at the bounce, where one sets t = 0) that goes from minus infinity to plus infinity with no beginning (and no end). Well away from the bounce, there is a different thermodynamic time t' (increasing with increasing entropy) that for t >> 0 increases with t but for t << 0 decreases with t (so there t' becomes more positive as t becomes more negative). For example, if one said that t' is only defined for |t| > 1, say, one might have something like

t' = (t^2 - 1)^{1/2},

the positive square root of one less than the square of t. This thermodynamic time t' only has real values when the absolute value of the coordinate time t, that is, |t|, is no smaller than 1, and then t' increases with |t|.

One might say that t' begins (at t' = 0) at t = -1 (for one universe that has t' growing as t decreases from -1 to minus infinity) and at t = +1 (for another universe that has t' growing as t increases from +1 to plus infinity). But since the spacetime exists for all real t, with respect to that time arising from general relativity there is no beginning and no end of this universe.
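A trivial numerical illustration of this picture (my own, just tabulating the formula above):

# Thermodynamic time t' = sqrt(t^2 - 1), defined only for |t| >= 1.
import math

for t in [-4, -3, -2, -1, 1, 2, 3, 4]:
    print("t = %+d  ->  t' = %.3f" % (t, math.sqrt(t * t - 1)))

t' grows as t runs from -1 down to minus infinity, and again as t runs from +1 up to plus infinity: two thermodynamic arrows pointing away from the bounce, with no beginning of the coordinate time t itself.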

Bill, I think you also objected to a model like this by saying that it violates the second law (presumably in the sense that the coarse-grained entropy does not increase monotonically with t for all real t). But if we exist for t >> 1 (or for t << -1; there would be no change to the overall behavior if t were replaced with -t, since the laws are CPT invariant), then we would be in a region where the second law is observed to hold, with coarse-grained entropy increasing with t' \sim t (or with t' \sim -t if t << -1). A viable bounce model would have it so that it would be very difficult or impossible for us directly to observe the bounce region where the second law does not apply, so our observations would be in accord with the second law even though it does not apply for the entire universe.

I think I objected to both of your probability estimates for various things regarding fine tuning. Probabilities depend on the theory or model, so without a definite model, one cannot claim that the probability for some feature like fine tuning is small. It was correct to list me among the people believing in fine tuning in the sense that I do believe that there are parameters that naively are far different from what one might expect (such as the cosmological constant), but I agreed with the sentiment of the woman questioner that there are not really probabilities in the absence of a model.

Bill, you referred to using some “non-standard” probabilities, as if there is just one standard. But there isn’t. As Sean noted, there are models giving high probabilities for Boltzmann brain observations (which I think count strongly against such models) and other models giving low probabilities for them (which on this regard fits our ordered observations statistically). We don’t yet know the best model for avoiding Boltzmann brain domination (and, Sean, you know that I am skeptical of your recent ingenious model), though just because I am skeptical of this particular model does not imply that I believe that the problem is insoluble or gives evidence against a multiverse; in any case it seems also to be a problem that needs to be dealt with even in just single-universe models.

Sean, at one point you referred to some naive estimate of the very low probability of the flatness of the universe, but then you said that we now know the probability of flatness is very near unity. This is indeed true, as Stephen Hawking and I showed long ago (“How Probable Is Inflation?” Nuclear Physics B298, 789-809, 1988) when we used the canonical measure for classical universes, but one could get other probabilities by using other measures from other models.

In summary, I think the evidence from fine tuning is ambiguous, since the probabilities depend on the models. Whether or not the universe had a beginning also is ambiguous, and furthermore I don’t see that it has any relevance to the question of whether or not God exists, since the first premise of the Kalam cosmological argument is highly dubious metaphysically, depending on contingent intuitions we have developed from living in a universe with relatively simple laws of physics and with a strong thermodynamic arrow of time.

Nevertheless, in view of all the evidence, including the elegance of the laws of physics, the existence of orderly sentient experiences, and the historical evidence, I do believe that God exists and think the world is actually simpler if it contains God than it would have been without God. So I do not agree with you, Sean, that naturalism is simpler than theism, though I can appreciate how you might view it that way.

Best wishes,

Don

by Sean Carroll at March 20, 2015 03:17 PM

Symmetrybreaking - Fermilab/SLAC

The LHC does a dry run

Engineers have started the last step required before sending protons all the way around the renovated Large Hadron Collider.

All systems are go! The Large Hadron Collider’s operations team has started running the accelerator through its normal operational cycle sans particles as a final dress rehearsal before the restart later this month.

“This is where we bring it all together,” says Mike Lamont, the head of CERN’s operations team.

Over the last two years, 400 engineers and technicians worked a total of 1 million hours repairing, upgrading and installing new technology into the LHC. And now, the world’s most powerful particle accelerator is almost ready to start doing its thing.

“During this final checkout, we will be testing all of the LHC’s subsystems to make sure the entire machine is ready,” says Markus Albert, one of the LHC operators responsible for this dry run. “We don’t want any surprises once we start operation with beam.”

Engineers will simulate the complete cycle of injecting, steering, accelerating, squeezing, colliding and finally dumping protons. Then engineers will power down the magnets and start the process all over again.

“Everything will behave exactly as if there is beam,” Albert says. “This way we can validate that these systems will all run together.”

Operators practiced sending beams of protons part of the way around the ring earlier this month.

During this test, engineers will keep a particularly close eye on the LHC’s superconducting magnet circuits, which received major work and upgrades during the shutdown.

“The whole magnet system was taken apart and put back together again, and with upgraded magnet protection systems everything needs to be very carefully checked out,” Lamont says. “In fact, this has been going on for the last six months in the powering tests.”

They will also scrutinize the beam interlock system—the system that triggers the beam dump, which diverts the beam out of the LHC and into a large block of graphite if anything goes wrong.

“There are thousands of inputs that feed into the beam interlock system, and if any of these inputs say something is wrong or they are not happy about the behavior of the beam, the system dumps the beam within three turns of the LHC,” Lamont says. 

During the week of March 23, engineers plan to send a proton beam all the way around the LHC for the first time in over two years. By the end of May, they hope to start high-energy proton-proton collisions.

“Standard operation is providing physics data to the four experiments,” Albert says. “The rest is just preparatory work.”

 

LHC restart timeline

LHC filled with liquid helium: The Large Hadron Collider is now cooled to nearly its operational temperature.

First LHC magnets prepped for restart: A first set of superconducting magnets has passed the test and is ready for the Large Hadron Collider to restart in spring.

LHC experiments prep for restart: Engineers and technicians have begun to close experiments in preparation for the next run.

 

by Sarah Charley at March 20, 2015 02:50 PM

Quantum Diaries

Expanding the cosmic search

This article appeared in Fermilab Today on March 20, 2015.

The South Pole Telescope scans the skies during a South Pole winter. Photo: Jason Gallicchio, University of Chicago

Down at the South Pole, where temperatures drop below negative 100 degrees Fahrenheit and darkness blankets the land for six months at a time, the South Pole Telescope (SPT) searches the skies for answers to the mysteries of our universe.

This mighty scavenger is about to get a major upgrade — a new camera that will help scientists further understand neutrinos, the ghost-like particles without electric charge that rarely interact with matter.

The 10-meter SPT is the largest telescope ever to make its way to the South Pole. It stands atop a two-mile thick plateau of ice, mapping the cosmic microwave background (CMB), the light left over from the big bang. Astrophysicists use these observations to understand the composition and evolution of the universe, all the way back to the first fraction of a second after the big bang, when scientists believe the universe quickly expanded during a period called inflation.

One of the goals of the SPT is to determine the masses of the neutrinos, which were produced in great abundance soon after the big bang. Though nearly massless, because neutrinos exist in huge numbers, they contribute to the total mass of the universe and affect its expansion. By mapping out the mass density of the universe through measurements of CMB lensing, the bending of light caused by immense objects such as large galaxies, astrophysicists are trying to determine the masses of these elusive particles.

A wafer of detectors for the SPT-3G camera undergoes inspection at Fermilab. Photo: Bradford Benson, University of Chicago


To conduct these extremely precise measurements, scientists are installing a bigger, more sensitive camera on the telescope. This new camera, SPT-3G, will be four times heavier and have a factor of about 10 more detectors than the current camera. Its higher level of sensitivity will allow researchers to make extremely precise measurements of the CMB that will hopefully make it possible to cosmologically detect neutrino mass.

This photo shows an up-close look at a single SPT-3G detector. Photo: Volodymyr Yefremenko, Argonne National Laboratory



“In the next several years, we should be able to get to the sensitivity level where we can measure the number of neutrinos and derive their mass, which will tell us how they contribute to the overall density of the universe,” explained Bradford Benson, the head of the CMB Group at Fermilab. “This measurement will also enable even more sensitive constraints on inflation and has the potential to measure the energy scale of the associated physics that caused it.”

SPT-3G is being completed by a collaboration of scientists spanning the DOE national laboratories, including Fermilab and Argonne, and universities including the University of Chicago and University of California, Berkeley. The national laboratories provide the resources needed for the bigger camera and larger detector array while the universities bring years of expertise in CMB research.

“The national labs are getting involved because we need to scale up our infrastructure to support the big experiments the field needs for the next generation of science goals,” Benson said. Fermilab’s main role is the initial construction and assembly of the camera, as well as its integration with the detectors. This upgrade is being supported mainly by the Department of Energy and the National Science Foundation, which also supports the operations of the experiment at the South Pole.

Once the camera is complete, scientists will bring it to the South Pole, where conditions are optimal for these experiments. The extreme cold prevents the air from holding much water vapor, which can absorb microwave signals, and the sun, another source of microwaves, does not rise between March and September.

The South Pole is accessible only for about three months during the year, starting in November. This fall, about 20 to 30 scientists will head down to the South Pole to assemble the camera on the telescope and make sure everything works before leaving in mid-February. Once installed, scientists will use it to observe the sky over four years.

“For every project I’ve worked on, it’s that beginning — when everyone is so excited not knowing what we’re going to find, then seeing things you’ve been dreaming about start to show up on the computer screen in front of you — that I find really exciting,” said University of Chicago’s John Carlstrom, the principal investigator for the SPT-3G project.

Diana Kwon

by Fermilab at March 20, 2015 02:00 PM

Jester - Resonaances

LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+ μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV, see the plot. The measurement moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces new contributions to the 4-fermion operator (\bar b_L \gamma_\rho s_L) (\bar \mu \gamma_\rho \mu) can significantly improve agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.

So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing. Other observables may tip the scale. Only if a consistent pattern of deviations emerges in several B-physics observables can we trumpet victory.


Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine. 

by Jester (noreply@blogger.com) at March 20, 2015 01:56 PM

Clifford V. Johnson - Asymptotia

Festival of Books!
(Click for larger view of 2010 Festival "What are you reading?" wall.) So the Festival of Books is 18-19th April this year. If you're in or near LA, I hope you're going! It's free, it's huge (the largest book festival in the USA) and also huge fun! They've announced the schedule of events and the dates on which you can snag (free) tickets for various indoor panels and appearances since they are very popular, as usual. So check out the panels, appearances, and performances here. (Check out several of my past posts on the Festival here. Note also that the festival is on the USC campus which is easy to get to using great public transport links if you don't want to deal with traffic and parking.) Note also that the shortlist for the 2014 LA Times Book Prizes was announced (a while back - I forgot to post about it) and it is here. I always find it interesting... for a start, it is a great list of reading suggestions! By the way, apparently I'm officially an author - not just a guy who writes from time to time - an author. Why? Well, I'm listed as one on the schedule site. I'll be on one of the author panels! It is moderated by KC Cole, and I'll be joining [...] Click to continue reading this post

by Clifford at March 20, 2015 01:56 PM

Lubos Motl - string vacua and pheno

A neat story on SUSY in Business Insider
A large percentage of the people and the "mainstream media" have all kinds of crazy opinions, often combined with downright hostility towards science in general and modern theoretical physics in particular. Supersymmetry has been a frequent target of numerous outlets in recent years.

But the 2015 LHC run is getting started and it's an exciting time – surely not a good time for bitterness – and one sometimes finds newspapers that turn out to be great surprises. Today, I need to celebrate the story
Here's how proving supersymmetry could completely change how we understand the universe
by Kelly Dickerson in the Business Insider. She explains why the Standard Model seems to be an incomplete theory and how SUSY helps with the fine-tuning of the Higgs mass; gives a dark matter candidate; and moves a step closer to experimentally establishing string theory, a theory of everything.

It also honestly states that there's no direct experimental sign of SUSY yet, and that the LHC may change this situation soon or not.




I am happy about the Business Insider for another reason. Yesterday, it was one of the first news outlets that wrote at least a balanced story about Varoufakis' finger.

During a 2013 communist conference in Croatia, the current Greek finance minister showed an obscene gesture as he was saying that "Greece should stick the finger to Germany". It was a bit controversial and Germans got excited about it – unlike the Czech politicians, German politicians probably don't use this gesture on a daily basis. ;-)




A few days ago, a satirical program on German TV claimed that it had "doctored" the video, and showed Star-Trek-like green men who were apparently used to create the fake scenes. Jan Böhmermann, a comedian and the host, showed the "obscene version" of the Varoufakis 2013 video around 1:30, and the "polite version" around 2:10.

Now, an impartial, independently thinking person must ask: which version is actually right and which is fake? That's how Business Insider approached it, too. The answer is, of course, that the "obscene version" is legitimate while the "polite version" is fake, and the program claiming that the "obscene video" had been doctored was a hoax – a "fake fake", if you wish. Mr Böhmermann has faked the doctoring, as thelocal.de puts it.

Mr Böhmermann tries to be funny and ambiguous about it – while promoting himself. Later, he said that it was a "fake fake fake fake". Well, as long as the number of "fakes" in the phrase is even, it's still true that the finger was actually there.

There are dozens of ways to see that this is almost certainly the right answer. But the number of news outlets that were able to see through this – not too complicated – fog was very limited. Today, The New York Times was also able to notice that "what is true" and "what is fake" are exactly interchanged relative to what most news outlets try to claim (and what their gullible readers think).

You may see that the "video with the finger" was also posted on the SkriptaTV YouTube channel (40:31 is the gesture) that belongs to the organizers of the 2013 "subversive" festival of Marxists. There was no reason 1 month ago why they should have used a version of the seriously meant 1-hour video doctored by some German TV folks. Also, the "polite version" is much less rhythmical and natural than the "obscene version". Also, the polite version displays much less motion of the hand during the critical moments – and it's easier to fake the hand motion if the hand is not moving much. There are other arguments, but I don't want to spend an hour on them.

But most people are apparently unable to comprehend the concept of "fake fake things". More generally, they are unable to see that the people who criticize something may be worth criticism themselves. An overwhelming majority of the people are morons who immediately endorse everyone who criticizes, and so on. Thank God, they're not a majority in The New York Times and the Business Insider. People claiming to have doctored something may be joking, too.



But back to SUSY

What I particularly liked about the positive story on SUSY in the Business Insider were the helpful votes in the discussion under the article. One or two cranks offered their cheap anti-SUSY and anti-string-theory slogans. They were immediately replied to by others – and according to the votes, the Shmoit-like trolls were voted down by approximately a 10-1 ratio.

The Shmoit-like trolls have conquered most of the cesspools in the world but there are places in the world that are not cesspools. ;-)



By the way, would some of you agree with me that Milan Šteindler, this TV "scientist" who promotes a car leasing company in the commercial above, is similar to Don Lincoln of Fermilab?

Two decades ago, Šteindler would star as the "more German" teacher of German in the bogus TV course of the German language "Alles Gute" (a part of the show "Czech Soda"). For example, check this advertisement for Škoda Henlein, a variation of Škoda Greenline with Zyklon B – typical strong-coffee Czech black humor. (Henlein was the pro-Nazi leader of the minority ethnic Germans in the Sudetenland before Czechoslovakia lost the territory in 1938.)

by Luboš Motl (noreply@blogger.com) at March 20, 2015 12:58 PM

Sean Carroll - Preposterous Universe

Auction: Multiply-Signed Copy of Why Evolution Is True

Here is a belated but very welcome spinoff of our Moving Naturalism Forward workshop from 2012: Jerry Coyne was clever enough to bring along a copy of his book, Why Evolution Is True, and have all the participants sign it. He subsequently gathered a few more distinguished autographs, and to make it just a bit more beautiful, artist Kelly Houle added some original illustrations. Jerry is now auctioning off the book to benefit Doctors Without Borders. Check it out:


Here is the list of signatories:

  • Dan Barker
  • Sean Carroll
  • Jerry Coyne
  • Richard Dawkins
  • Terrence Deacon
  • Simon DeDeo
  • Daniel Dennett
  • Owen Flanagan
  • Anna Laurie Gaylor
  • Rebecca Goldstein
  • Ben Goren
  • Kelly Houle
  • Lawrence Krauss
  • Janna Levin
  • Jennifer Ouellette
  • Massimo Pigliucci
  • Steven Pinker
  • Carolyn Porco
  • Nicholas Pritzker
  • Alex Rosenberg
  • Don Ross
  • Steven Weinberg

Jerry is hoping it will fetch a good price to benefit the charity, so we’re spreading the word. I notice that a baseball signed by Mickey Mantle goes for about $2000. In my opinion a book signed by Steven Weinberg alone should go for even more, so just imagine what this is worth. You have ten days to get your bids in — and if it’s a bit pricey for you personally, I’m sure there’s someone who loves you enough to buy it for you.

by Sean Carroll at March 20, 2015 01:02 AM

March 19, 2015

Clifford V. Johnson - Asymptotia

LAIH Luncheon with Jack Miles
(Click for larger view.) On Friday 6th March the Los Angeles Institute for the Humanities (LAIH) was delighted to have our luncheon talk given by LAIH Fellow Jack Miles. He told us some of the story behind (and the making of) the Norton Anthology of World Religions - he is the main editor of this massive work - and lots of the ins and outs of how you go about undertaking such an enterprise. It was fascinating to hear how the various religions were chosen, for example, and how he selected and recruited specialist editors for each of the religions. It was an excellent talk, made all the more enjoyable by having Jack's quiet and [...] Click to continue reading this post

by Clifford at March 19, 2015 09:16 PM

Quantum Diaries

Ramping up to Run 2

When I have taught introductory electricity and magnetism for engineers and physics majors at the University of Nebraska-Lincoln, I have used a textbook by Young and Freedman. (Wow, look at the price of that book! But that’s a topic for another day.) The first page of Chapter 28, “Sources of Magnetic Field,” features this photo:

[photo: the cryostat of the CMS solenoid magnet]

It shows the cryostat that contains the solenoid magnet for the Compact Muon Solenoid experiment. Yes, “solenoid” is part of the experiment’s name, as it is a key element in the design of the detector. There is no other magnet like it in the world. It can produce a 4 Tesla magnetic field, 100,000 times greater than that of the earth. (We actually run at 3.8 Tesla.) Charged particles that move through a magnetic field take curved paths, and the stronger the field, the stronger the curvature. The more the path curves, the more accurately we can measure it, and thus the more accurately we can measure the momentum of the particle.
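
A back-of-the-envelope sketch of that relation (my own numbers and formula, not from the post): for a singly charged particle, the bending radius is roughly r [m] = p [GeV/c] / (0.3 B [T]), so the stronger field curls a track of a given momentum into a tighter, more measurable circle.

    # Hypothetical illustration: bending radius of a charged track in a solenoid field.
    def bending_radius_m(p_gev_per_c, b_tesla):
        """r [m] ~ p [GeV/c] / (0.3 * B [T]) for a singly charged particle."""
        return p_gev_per_c / (0.3 * b_tesla)

    for b in (2.0, 3.8):   # a weaker field vs. the CMS operating field
        print(b, "T ->", round(bending_radius_m(100.0, b), 1), "m for a 100 GeV/c track")
    # At 3.8 T the same 100 GeV/c track bends with a ~88 m radius instead of ~167 m at 2 T.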

The magnet is superconducting; it is kept inside a cryostat that is full of liquid helium. With a diameter of seven meters, it is the largest superconducting magnet ever built. When in its superconducting state, the magnet wire carries more than 18,000 amperes of current, and the energy stored is about 2.3 gigajoules, enough energy to melt 18 tons of gold. Should the temperature inadvertently rise and the magnet become normal conducting, all of that energy needs to go somewhere; there are some impressively large copper conduits that can carry the current to the surface and send it safely to ground. (Thanks to the CMS web pages for some of these fun facts.)
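
As a quick consistency check of those numbers (my own arithmetic, not from the post), treating the solenoid as an ideal inductor with E = (1/2) L I^2 gives the inductance implied by the quoted stored energy and current.

    # Inductance implied by the quoted stored energy and current (ideal-inductor estimate).
    E_stored = 2.3e9       # joules, as quoted above
    I_current = 18_000.0   # amperes, "more than 18,000 amperes"
    L_implied = 2 * E_stored / I_current**2
    print(round(L_implied, 1), "henries")   # roughly 14 H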

With the start of the LHC run just weeks away, CMS has turned the magnet back on by slowly ramping up the current. Here’s what that looked like today:

[plot: the solenoid current being ramped up over the course of the day]

You can see that they took a break for lunch! It is only the second time since the shutdown started two years ago that the magnet has been ramped back up, and now we’re pretty much going to keep it on for at least the rest of the year. From the experiment’s perspective, the long shutdown is now over, and the run is beginning. CMS is now prepared to start recording cosmic rays in this configuration, as a way of exercising the detector and using the observed muons to improve our knowledge of the alignment of detector components. This is a very important milestone for the experiment as we prepare for operating the LHC at the highest collision energies ever achieved in the laboratory!

by Ken Bloom at March 19, 2015 06:18 PM

Quantum Diaries

On Being an Artwork

Back when we were discussing Will Self’s impression of CERN as a place where scientists had no interest in the important philosophical questions, I commented that part of the trouble was Self’s expectation that scientists who were expecting to give him a technical tour should be prepared to have an ad hoc philosophical discussion instead. I also mentioned that many physicists can and will give interviews on broader topics. What I didn’t mention is that I included myself, because I had already done an interview in 2014 on the “existential” boundaries of physics knowledge. I didn’t know at the time that that interview had already been made part of an art installation! I ran across the installation by chance recently, and I think it’s worth taking a close look at because it provides a positive example of substantive engagement between art, philosophy, and science.

The installation is “sub specie aeternitatis”, by Rosalind McLachlan. It is described in a review for Axisweb by Matthew Hearn as “seek[ing] greater understanding not through belief in the knowable, but in asking scientists to address the limitations of their field and forcing them to consider the ‘existential horror’ – the problems of our existence in terms of what we can’t know.” It features five CERN physicists talking all at once on separate screens about questions that physics can’t necessarily answer. If I remember correctly, it looks like mine was “What happened before the Big Bang?”

I didn’t get to see the exhibit itself, but the review makes it clear that the installation went far beyond simply showing video of the interviewees. The artist made conscious decisions about how to weave our words and surroundings together:

Whilst at any one moment only a single voice plays, thinking aloud – struggling to find meaning – collectively all five characters appear to be working together, evolving a visual language of gesture and animated body movement, grasping to find some shared form of resolution. CERN has been celebrated for the way international communities collaborate, put individual agendas aside, and share knowledge and understanding, and the visual simultaneity within McLachlan’s installation captures this collegiate approach.

The piece thus presents the core values of international collaborative science in a novel way, beyond the mere words scientists usually use to explain it. But it isn’t a new allegory divorced from actual scientists at CERN and our work: it still uses our own words, mannerisms, and office whiteboards to build the impression. What a wonderful example of how art can add new dimensions to communicating about science!

Looking at my part of an excerpt from the installation video, it’s clear why I’m in it. Not so much because of what I’m saying – a lot of it is explained better on Sean Carroll’s blog, even if I do disagree with him sometimes on philosophical interpretations. But because of how I’m saying it: slowly, with long deliberate pauses that allow the other screens to speak and give the impression that I’m working things out as I go along. What I was really doing was working out how best to communicate my ideas, but this installation isn’t replicating life as literally as a documentary would. It does replicate how some physicists think about science and philosophy, and how we work together, and I think that’s remarkable.


by Seth Zenz at March 19, 2015 09:38 AM

Marco Frasca - The Gauge Connection

New Lucasian Professor

After a significant delay, Cambridge University made known the name of Michael Green‘s successor in the Lucasian chair. The 19th Lucasian Professor is Michael Cates, Professor at the University of Edinburgh and Fellow of the Royal Society. Professor Cates is known worldwide for his research in the field of soft condensed matter. It is a well deserved recognition and one of the best choices ever made for this prestigious chair. So, we present our best wishes to Professor Cates for an excellent job in this new role.


Filed under: Condensed matter physics, News, Physics Tagged: Cambridge University, Edinburgh University, Lucasian Professor, Soft condensed matter

by mfrasca at March 19, 2015 08:58 AM

astrobites - astro-ph reader's digest

On and On They Spin

Title: On the Nature of Rapidly Rotating Single Evolved Stars

Authors: R. Rodrigues da Silva, B. L. Canto Martins, and J. R. De Medeiros

First Author’s Institution: Department of Theoretical and Experimental Physics, Federal University of Rio Grande do Norte

Status of paper: Published in ApJ

Nothing sits still in our Universe. Everything is always on the move. Like planets, stars rotate. Really, they are the most obsessive ballet dancers, perpetually doing spins (or fouetté, if you will) until they die. The authors of this paper found that certain types of stars unexpectedly display rapid rotation when they are not supposed to.

Astronomers like — really like — to categorize things. Stars are categorized according to their spectral features (i.e., the presence of certain elements in the spectrum) and can be one of the following spectral types: O, B, A, F, G, K, or M (“Oh Be A Fine Girl Kiss Me”). Temperature decreases from spectral type O to spectral type M, with O stars being the hottest and M stars being the coolest. However, because stars of the same spectral type can have widely different luminosities (and so different radii by the Stefan-Boltzmann Law, which relates luminosity, radius, and surface temperature of a star), a second classification by luminosity is added, where stars are assigned Roman numerals from I (supergiants) to V (main-sequence dwarfs). The paper today focuses on evolved stars of spectral type G and K and luminosity class IV (subgiants), III (normal giants), II (bright giants), and Ib (supergiants). Supergiants are the brightest and largest, followed by bright giants, normal giants, and finally subgiants.
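
As an aside, a small sketch of the Stefan-Boltzmann bookkeeping (mine, not from the astrobite; the temperature and luminosities below are purely illustrative) shows why temperature alone is not enough: at a fixed surface temperature, L = 4 pi R^2 sigma T^4 forces the radius to grow as the square root of the luminosity.

    from math import pi, sqrt

    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    L_SUN = 3.828e26    # solar luminosity, W
    R_SUN = 6.957e8     # solar radius, m

    def radius_m(luminosity_w, temperature_k):
        """Invert L = 4 pi R^2 sigma T^4 for the stellar radius."""
        return sqrt(luminosity_w / (4 * pi * SIGMA * temperature_k**4))

    T = 5500.0                    # a roughly G-type surface temperature, K
    for lum in (1.0, 100.0):      # illustrative luminosities in solar units
        print(lum, "L_sun ->", round(radius_m(lum * L_SUN, T) / R_SUN, 1), "R_sun")
    # Same temperature but 100x the luminosity gives a radius about 10x larger,
    # hence the separate luminosity classes.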

Humans wind down over the years, and stars do too. Stars spin down as they age, owing to the loss of angular momentum through outflows of gas ejected from their atmospheres, also known as stellar winds. Therefore we expect evolved stars to spin more slowly than young stars. Evolved G- and K-stars are known to be slow rotators (rotating at a few km/s), with rotation decreasing gradually from early-G stars to late-K stars. However, as is always the case in astronomy, there are counterexamples. As far back as four decades ago, astronomers found rapidly rotating G and K giant stars (luminosity class III) spinning as fast as 80 km/s. How and why these stars are able to spin this fast is still a puzzle, with proposed explanations ranging from coalescing binary stars and sudden dredge-up of angular momentum from the stellar interior to the engulfment of hot Jupiters (Jupiter-sized exoplanets that orbit very close to their parent stars, hence the name “hot Jupiters”) by giant stars, which would spin them up.

Using a set of criteria, the authors of this paper hunted for single rapidly-rotating G- and K-stars in the Bright Star Catalog and the catalog of G- and K-stars compiled by Egret (1980). Out of 2010 stars, they uncovered a total of 30 new rapidly-rotating stars among subgiants, giants, bright giants, and supergiants. Until now, rapid rotators had only been found among giant stars; this work reports for the first time the presence of such rapid rotators among subgiants, bright giants, and supergiants. In fact, these objects make up more than half of the number of rapid rotators in their sample. Figure 1 shows the velocities along the line of sight (v sin i) versus effective temperatures for their sample of evolved rapid rotators, compared with G and K binaries (i.e., binary star systems consisting of G- and K-stars). The similarity between the two populations implies a similar synchronization mechanism between the rotation of single evolved stars and the orbital motion of the binary systems. That interesting relation aside, the main point to note from the plot is the large observed velocities of the rapid rotators compared to the mean rotational velocities of G- and K-stars.


FIG 1 – Projected rotational velocity along line of sight, v sin i, vs. effective temperature Teff for rapidly rotating single G- and K-stars (filled circles) and rapidly rotating G and K binary systems (open circles). The rectangular zones at the bottom of the figure represent the mean rotational velocities for G- and K-stars that are subgiants (solid line), normal giants (dashed line), and supergiants (dashed–dotted line).

 

The rapidly-rotating stars are analyzed for far-IR excess emission, which may indicate the presence of warm dust surrounding the stars (warm dust emits radiation in the mid- to far-IR regime). Looking at Figure 2, a trend of far-IR excess emission is clearly seen for almost all of the 23 stars they analyzed. The origin of dust close to these stars is not well understood; some attribute it to stellar winds driven by magnetic activity, while others hypothesize that it comes from collisions of planetary companions around these stars. In any case, any theory that tries to explain the nature of the rotation in these single systems needs to account for the presence of warm dust.


FIG 2 – Far-IR colors for 23 single, evolved G and K rapid rotators. The left plot shows the V-[12] color (where V refers to the optical V band and [12] to IRAS‘ 12 µm band), while the right plot shows V-[25], where [25] is IRAS’ 25 µm band. The rapid rotators are the red points, while the dashed, solid, and dotted lines are far-IR colors for normally behaving G- and K-stars compiled by three different studies. The large offsets of the red points from the lines are evidence of far-IR excess emission.

 

The authors proposed that the coalescence of a star with a low-mass stellar or substellar companion (i.e., a brown dwarf), or tidal interaction in planetary systems with hot Jupiters, are plausible scenarios that can explain single rapidly-rotating evolved stars. Because each scenario should produce different chemical abundances, the authors suggested searching for changes in specific abundance ratios, such as the relative enhancement of refractory over volatile elements, in these stars to differentiate between the various possible scenarios above.

Spinning stars are cool. Even cooler are rapidly rotating giant-like stars that spin out of the clutches of theoretical predictions. While in the past rapid rotators among evolved giant stars could be explained away using small-number statistics, the authors of this paper have added an order of magnitude more objects to the list, forcing stellar astrophysicists to come face-to-face with the question of the nature of these rapid rotations.

 

by Suk Sien Tie at March 19, 2015 05:22 AM

March 18, 2015

ZapperZ - Physics and Physicists

CERN's ALPHA Experiment
See, I like this. I like to highlight things that most of the general public simply don't know much about, especially when another major facility throws a huge shadow over it.

This article mentions two important things about CERN: It is more than just the LHC, and it highlights another very important experiment, the ALPHA experiment.

ALPHA’s main aim is to study the internal structure of the antihydrogen atom, and see if there exist any discernible differences within it that set it apart from regular hydrogen. In 2010 ALPHA was the first experiment to trap 38 antihydrogen atoms (an antielectron orbiting an antiproton) for about one-fifth of a second, and then the team perfected its apparatus and technique to trap a total of 309 antihydrogen atoms for 1000 s in 2011. Hangst hopes that with the new updated ALPHA 2 device (which includes lasers for spectroscopy), the researchers will soon see precisely what lies within an antihydrogen atom by studying its spectrum. They had a very short test run of a few weeks with ALPHA 2 late last year, and will begin their next set of experiments in earnest in the coming months.

They will be producing more amazing results in the future, because this is all uncharted territory. 

Zz.

by ZapperZ (noreply@blogger.com) at March 18, 2015 09:37 PM

Symmetrybreaking - Fermilab/SLAC

Inside the CERN Control Centre

Take a tour of one of the most important rooms at CERN.

CERN is more than just the Large Hadron Collider. A complex network of beam lines feeds particles from one accelerator to the next, gradually ramping up their energy along the way.

Before reaching the LHC, protons must first zip from the source, down a linear accelerator (Linac2), and through a series of other accelerators (the Proton Synchrotron Booster, the Proton Synchrotron and the Super Proton Synchrotron). Ions accelerated at CERN have their own unique journey through another set of accelerators that eventually bring them to the PS, SPS and finally, the LHC.
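
To make the "ramping up" concrete, here is a rough sketch of the proton chain with approximate Run 2-era beam energies (my ballpark numbers, not figures from the article).

    # Approximate proton energies along the injector chain (illustrative values only).
    chain = [
        ("Linac2", 50e6),                       # ~50 MeV
        ("Proton Synchrotron Booster", 1.4e9),  # ~1.4 GeV
        ("Proton Synchrotron", 25e9),           # ~25 GeV
        ("Super Proton Synchrotron", 450e9),    # ~450 GeV
        ("LHC", 6.5e12),                        # ~6.5 TeV per beam in Run 2
    ]
    previous = None
    for name, energy_ev in chain:
        note = "" if previous is None else f" (about {energy_ev / previous:.0f}x the previous stage)"
        print(f"{name}: {energy_ev / 1e9:g} GeV{note}")
        previous = energy_ev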

At one point, each of CERN’s accelerators had its own team and its own control room—which made communication between the different accelerators cumbersome, says Mike Lamont, the Beam Department’s head of operations. “The guys running the SPS would have to push an intercom to communicate with the PS.” So, during the construction of the LHC, the control rooms were brought together into one room. The CERN Control Centre was born.

If the accelerator complex is CERN’s nervous system, then the CCC is its brain. Let us take you on a tour of one of the most important rooms at CERN.

The islands

The CCC is made up of four “islands,” each a circular arrangement of consoles and displays. Each island hosts the controls for a set of machines.

PS and Booster island

This island controls the Proton Synchrotron (PS) and Booster, two of the oldest accelerators at CERN. The PS was CERN’s flagship machine when it accelerated its first protons in 1959. Now it passes its particles on to the Super Proton Synchrotron, which feeds particles either to the LHC or a number of fixed-target experiments. The PS also serves a number of other users, which include the anti-proton decelerator (the AD) and a neutron experimental facility (nTOF).

SPS island

This island controls the Super Proton Synchrotron, the second largest accelerator in CERN’s complex. It ramps up the energy of protons and ions before diverting them to fixed-target experiments or injecting them into the LHC.

LHC island

This island controls CERN’s largest and most powerful accelerator, the Large Hadron Collider. It’s the end of the line for particles that are about to get the ride of a lifetime. The LHC accelerates protons or ions to even higher energies and drives them into collisions in the center of the massive detectors of the ATLAS, ALICE, CMS and LHCb experiments.

Technical infrastructure island

What would an accelerator be without power? The infrastructure that supports CERN’s accelerator complex is so important that it gets its own island in the CCC. Here, operators oversee things like the ventilation, safety systems and the electrical network. Even during a shutdown when no accelerators are running, there are always two people operating this island. A separate team also based at this island looks after the vast cryogenics system that cools the helium used in the LHC magnets.

Operators

The men and women who oversee the performance of the accelerators are a collection of operators, engineers and physicists. They are responsible for ensuring that all of the equipment in CERN’s massive accelerator complex runs like clockwork.

During operation with beam, there are always at least two operators per island to monitor the machines’ health and safety—even in the middle of the night and over the holidays.

Champagne bottles

This row of empty bottles represents the history of the LHC: first beam in the LHC, record energy, record luminosity, first collisions and about a dozen other events. Operators, physicists and engineers celebrated them all with personalized bottles of bubbly—generously donated by the experiments as a “thank you” to the men and women in the CCC.

Wall screens

How do you make sure an accelerator is healthy? You can check on it in real time. CERN’s accelerators are outfitted with special technology that monitors things such as beam quality, beam intensity, spacing between the proton bunches, cooling and the power supplies. The computer monitors lining the walls of the CCC give the operators real-time updates about the health of the accelerators so that they can quickly respond if anything goes wrong.

Access Control

Wedged between the computer screens are huge metal boxes with rows of yellow, green and red buttons and dangling keys. It looks like something you might find in a 1960s sci-fi movie, but it is actually the system that controls access to the underground areas.

“This allows us to let people into the ring,” Lamont says. “It’s carefully controlled because this area can contain a high level of radiation, so we want to make sure we know who goes in and out.” The need for reliability is so great that the operators in the CCC use physical keys and switches instead of a software system.

 

Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at March 18, 2015 02:39 PM

Tommaso Dorigo - Scientificblogging

Watch The Solar Eclipse On Friday!

On the morning of March 20th, Europeans will be treated to the amazing show of a total solar eclipse. The path of totality is unfortunately confined to the northern Atlantic ocean, and will miss Iceland and England, passing only over the Faroe Islands - no wonder there have been no hotel rooms available there since last September! Curiously, the totality will end at the north pole, which on March 20th has the sun exactly at the horizon. Hence the conditions for a great shot like the one below are perfect - I only hope somebody will be at the north pole with a camera...

(Image credit: Fred Bruenjes; apod.nasa.gov)

read more

by Tommaso Dorigo at March 18, 2015 10:29 AM

Lubos Motl - string vacua and pheno

Umbral moonshine and Golay code
Many of us greatly liked Erica Klarreich's article
Mathematicians Chase Moonshine’s Shadow
in the Quanta Magazine. The subtitle summarizes the article as "Researchers are on the trail of a mysterious connection between number theory, algebra and string theory" and it is a balanced and poetic overview of the history of moonshine, its shadowy generalization, and some recent results in the subfield.



In his 1975 paper, Andrew Ogg actually promised a bottle of Jack Daniel's whiskey to the person who proves the connection (page 7-07: "Une bouteille de Jack Daniels est offerte à celui qui expliquera cette coïncidence", i.e. a bottle of Jack Daniels is offered to whoever explains this coincidence). But this bottle seems more pedagogic, especially for readers who are teenagers. Ogg was tempted to buy the bottle for Fields Medal winner Borcherds but Conway said "no, Borcherds only proved things and did not explain the connection". Well, I think that by now, the connection has also been "explained" (and Conway only disagrees because he thinks that the bottle will keep on motivating an army of bright mathematicians to do further work) but it seems that Ogg hasn't given the bottle to anyone yet!

I want to offer you some more technical remarks about these amazing mathematical structures and their new organization. The mathematical structure I want to focus on is the Mathieu \(M_{24}\) group and the error-correcting code, the binary Golay code, from which the group may be deduced.




I want to put things in the broadest possible context. There are many complementary ways to do so here. But one global perspective we may take here is the classification of finite simple groups.

It is the work of hundreds of mathematicians who made their contributions (mostly) over half a century and completed the program about 20 years ago; the proof of their theorem was originally distributed over some 10,000 pages of technical articles in mathematical journals – although some folks have been working on shortening the proof, which is undoubtedly possible.




What are the groups, what does it mean to classify them, and what does the resulting classification actually look like? Well, a group is a mathematical structure that is used to describe symmetries mathematically. Technically, it is the set \(G\) of elements – you may interpret each element as an operation \(g\) – that leave some "object" fixed.

But you may (and perhaps you should) forget about any particular visualization of the object. The only thing that you remember is the multiplication table \(gh\) for all the elements. The multiplication is nothing else than the application of both operations in a row; \(gh\) means that you first transform the object by \(h\) and then by \(g\). This order is a convention – and I chose the convention in which the transformed object may be the pure vector \(\ket\psi\) in a quantum mechanical model. It is always possible to imagine that the objects are "wave functions" and the operations themselves (the elements of the group) are linear operators and/or matrices.

Just like the matrix multiplication, the product (composition; the [meta] "operation" combining two operations in the group) must always be associative, \((ab)c=a(bc)\), so that only the ordering of the operations (in this case \(c\), then \(b\), then \(a\)) matters. One of the elements is the identity element \(g=1\) that doesn't change the object at all, so that \(1h=h1=h\) for each \(h\), and there is an inverse element \(g^{-1}\) for each \(g\) such that \(gg^{-1}=g^{-1}g=1\). A group doesn't have to be commutative, \(ab\neq ba\), and most groups are not. But if this holds universally, we call it a commuting or Abelian group (and in that case, we may use the addition \({+}\) as a symbol instead of the multiplication \({\cdot}\) and call it the additive, not multiplicative group).
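
For readers who like to see definitions run, here is a minimal Python sketch (mine, not from the post) that checks these axioms directly from a finite multiplication table; exactly as described above, the "object" is forgotten and only the table matters.

    def is_group(elements, mul):
        """Check closure, associativity, identity and inverses for a finite multiplication table."""
        closed = all(mul(a, b) in elements for a in elements for b in elements)
        associative = all(mul(mul(a, b), c) == mul(a, mul(b, c))
                          for a in elements for b in elements for c in elements)
        identity = next((e for e in elements
                         if all(mul(e, g) == g == mul(g, e) for g in elements)), None)
        inverses = identity is not None and all(
            any(mul(g, h) == identity == mul(h, g) for h in elements) for g in elements)
        return closed and associative and inverses

    # The Abelian group {+1, -1} under ordinary multiplication (see the face example below):
    print(is_group({+1, -1}, lambda a, b: a * b))              # True

    # The additive cyclic group Z_5, i.e. {0,...,4} with addition modulo 5:
    print(is_group(set(range(5)), lambda a, b: (a + b) % 5))   # True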



Well, faces are not exactly \(\ZZ_2\)-symmetric but pretty women get pretty close. Well, the "mirrored right sides" Florence Colgate seems fatter and different but it's OK.

It's an extremely natural definition and the groups – sets of objects like that – are important everywhere. You may always ask what is the symmetry group of some real-world or mathematical object, what is the group of transformations that preserve some "structure" or "characteristic features" of the object. A face is approximately \(\ZZ_2\) symmetric – it's an Abelian group that you may visualize as the set \(\{+1,-1\}\) with the usual multiplication. But there are many more interesting groups.

The classification – the way to list all the groups – has been completed for finite simple groups. They're finite if the sets contain a finite number of elements – the number of elements is a finite integer. So they're obviously "discrete" groups. Moreover, if you repeat any single operation enough times, you get back to the identity.

The word "simple" has a technical meaning. It means that the group contains no "normal" or "invariant" (synonymous) subgroups. A normal subgroup \(H\) of group \(G\) is a set \(H\) such that \(gHg^{-1}=H\) for any \(g\in G\) – where \(gHg^{-1}\) is meant to denote the set of all \(ghg^{-1}\) where \(h\) is taken over all \(h\in H\). That's how Wolfram Mathematica would deal with products involving sets.
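
As a toy illustration (mine, not from the post), the \(gHg^{-1}=H\) test can be run explicitly on permutations of three objects, with the "apply \(h\) first, then \(g\)" convention chosen above.

    from itertools import permutations

    def compose(g, h):
        """(gh)[i] = g[h[i]]: apply h first, then g."""
        return tuple(g[h[i]] for i in range(len(h)))

    def inverse(g):
        inv = [0] * len(g)
        for i, image in enumerate(g):
            inv[image] = i
        return tuple(inv)

    S3 = set(permutations(range(3)))          # all 6 permutations of three objects
    A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # the 3 even permutations (identity and two 3-cycles)
    H2 = {(0, 1, 2), (1, 0, 2)}               # identity plus a single transposition

    def is_normal(H, G):
        return all({compose(compose(g, h), inverse(g)) for h in H} == H for g in G)

    print(is_normal(A3, S3))   # True:  A_3 is a normal subgroup of S_3
    print(is_normal(H2, S3))   # False: conjugation moves the transposition around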

You may define a direct product \(G\times H\) of two groups. It's a set of all ordered pairs \((g,h)\) where \(g\in G\), \(h\in H\), and the multiplication (composition) applies to the first, \(G\)-like, and the second, \(H\)-like, component of the pair separately. Unless \(G=\{1\}\) or \(H=\{1\}\) i.e. unless one of the factors is a trivial 1-element group (that only contains the identity), \(G\) as well as \(H\) are normal subgroups of \(G\times H\), and \(G\times H\) is therefore not simple.

However, the direct product is not the only way how you may build a non-simple group. You may also define the semidirect product \(G\rtimes_\phi H\) in which the first component of the product isn't \(g_1g_2\) but \(g_1\phi_{h_2}(g_2)\) i.e. twisted by some homomorphism. In that case, \(G\) is still a normal subgroup although the whole group isn't a direct product.

As you can see, non-simple groups are "almost" direct products of simple groups, or semidirect ones, and their numbers of elements are simply the products of the numbers of elements of the factors. In this sense, the whole production of non-simple groups is analogous to getting composite numbers out of primes – except that primes may only be multiplied while the simple groups may be "directly" or "semidirectly" (many types classified by homomorphisms) multiplied.

The list of finite simple groups is therefore the group theory's counterpart of the list of primes, \(2,3,5,7,\dots\).

Now, what is the list? What is the result of the classification?

Even though the proof required 10,000 complicated pages, the sketch of the result is concise. The big theorem says that a finite simple (="prime") group has to be (isomorphic to, i.e. a relabeled copy of) either [read only the bold face if the blockquote below looks too long to you]
  • a cyclic group \(\ZZ_p\) which is the additive group \(\{0,1,2,\dots p-1\}\) where the addition occurs "modulo \(p\)" and where the number of elements \(p\) (the "order" of the group) is a prime for the group to be simple. Note that they're Abelian groups. You may also represent \(\ZZ_p\) as the group of rotations around a point by multiples of \(360^\circ/p\); a subgroup of \(U(1)\) i.e. absolute-value-one complex numbers whose multiplication gives you these rotations. Every finite Abelian group is a direct product of groups \(\ZZ_{p^n}\) whose order is a power of a prime, but you need a genuine prime (not a higher power) for simplicity.
  • an alternating group \(A_n\). The simplicity condition requires \(n\geq 5\); this condition is related to the fact that 5th and higher order algebraic equations can't be solved in radicals. The alternating group \(A_n\) contains all even permutations of \(n\) elements (i.e. permutations that may be written as a product of an even number of transpositions); so its order is \(n!/2\) (a quick numerical check of this counting appears right after this list).
  • a simple group of Lie type. You may write down lots of finite groups by using the terminology and logic from Lie groups of matrices, \(PSL,U,Sp,O(n,F)\), except that you don't allow the matrix entries to be real or complex numbers i.e. \(F=\RR\) or \(F=\CC\). Instead, for the sake of finiteness of the group, you choose \(F\) to be a finite field (something like the real numbers or complex numbers, a set where you may add and multiply the elements with the usual conditions), and those are completely classified and it's much easier than to classify the simple groups. You may also replace the "easy" Lie groups by the exceptional ones \(E,F,G\) – yes, those with the Dynkin diagrams but with real/complex numbers replaced by finite fields – and by twisted groups – those with some extra upper left numerical superscript.
  • Semisporadic, Tits group. I wrote it as an extra category because it's sometimes counted in the previous kingdom and sometimes to the following one – as the 27th sporadic group. It's "almost" like one of the groups above \({}^2 F_4(2)'\) derived from the exceptional Lie group \(F_4\) with some twist except that for one particular choice of the twist and the field, you need to make one more step, to consider a "derived subgroup", which is almost the same group as the original one but not quite, and that's why you also lose a "BN pair" so the Tits group doesn't quite agree with the properties in the previous "Lie type" category. You may count it as the easiest – but not smallest – among the "27" sporadic groups. The Tits group wasn't named according to any body parts.
  • Twenty-six sporadic groups. In everyday language, the word "sporadic" means almost the same as "exceptional" but the latter word had already been taken so a new adjective was reserved for groups that are "exceptional" in a completely new way. As I have mentioned, the Tits group is sometimes counted as the "most similar to the regular ones", 27th sporadic group. Its order is just under 18 million, although several other sporadic groups are smaller still. On the contrary, the largest sporadic group is the monster group followed by the baby monster group. The Mathieu \(M_{24}\) group is the largest Mathieu sporadic group and I will call it the third most fundamental sporadic group.
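
Before moving on, here is the promised quick check (mine, not from the post) of the \(n!/2\) counting for the alternating groups, done for \(A_5\) by listing the even permutations of five objects.

    from itertools import permutations
    from math import factorial

    def is_even(p):
        """A permutation is even iff its number of inversions is even."""
        inversions = sum(1 for i in range(len(p))
                           for j in range(i + 1, len(p)) if p[i] > p[j])
        return inversions % 2 == 0

    A5 = [p for p in permutations(range(5)) if is_even(p)]
    print(len(A5), factorial(5) // 2)   # 60 60
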
That's it. You may see that the "amount of wisdom" (or text needed to capture it) is finite but the ecosystem of the finite groups is very diverse, anyway. Like the "evolutionary tree of life", it contains lots of animals of different complexity with various relationships to other things, variable number of subspecies (breeds and races), and so on.

It's really fascinating – but quite typical in deep mathematics – that a simple task such as "classify all finite simple groups" (with simple and natural definitions) leads to a similarly complex, structured answer.

In the list of the "kingdoms of groups" in the classification above, the complexity is increasing as you go from the top to the bottom. Well, the complexity is a subjective matter (not a rigorously defined one) but I think that almost all mathematicians would agree. As I mention at the end, it may only be humans who see it in this way; God may see it in the opposite way. ;-)



Rubik's cube example

To show you a complex enough example – a good one to see how it typically works – let's consider the Rubik's cube group (Flash). One may perform operations with Rubik's cube and they form a finite group. The operations may be mapped, in a one-to-one way, to different states of the cube (because you may get any state of the cube by an "operation" from the chosen benchmark state, e.g. the sorted one).

It's a cute toy, Mr Rubik has earned tons of money, and everything seems to fit together. So if I ask you where the Rubik's cube group fits into the classification of the finite groups, you may be tempted to say that it's somewhere near the end. Perhaps, it's the monster group, isn't it?
Off-topic: Leslie Winkle and her equally subpar loop quantum gravity colleagues were proved wrong once again. Photons move at the same frequency-independent speed and only the kind of "quantum foam" that can't change this fact is allowed.
However, despite all the money, the finite group of the operations with Rubik's cube is one of the early boring ones. Well, first of all, the group isn't simple. In other words, it has a normal subgroup. What is it? It's the group \(C_0\) of all the operations that preserve the location of every block but may rotate the blocks around (the corner blocks by \(\ZZ_3\); the middle-of-the-edge blocks by \(\ZZ_2\)). It's a normal subgroup because if you conjugate such a "local rotation of the blocks" by a permutation of the blocks, the permutations cancel and you will get some, generally different "local rotation of the blocks" again (it's generally different because we are \(\ZZ_2\) or \(\ZZ_3\) rotating different, permuted blocks than without the conjugation).

This group \(C_0\) is actually totally boring. It is isomorphic to the Abelian group\[

C_0 = \ZZ_3^7 \times \ZZ_2^{11}.

\] The cube has 8 corners and almost all of them, except for one, may be rotated by \(\ZZ_3\) freely. However, the "required" rotation of the last 8th corner is determined by the other seven; recall that if only one corner is rotated by 120 degrees, there is no way to fix it. That's why the exponent with \(\ZZ_3\) is just seven. And similarly, the cube has twelve edges but one last middle-of-the-edge wrong block can't be fixed, so the exponent above \(\ZZ_2\) is just eleven.

Great. So the normal group is totally boring. The Rubik's cube group is a semidirect product\[

G = C_0 \rtimes C_p

\] where I have to describe the other factor of the semidirect product, \(C_p\). I should also describe the homomorphism needed to construct the semidirect product but it wouldn't be too difficult. The group \(C_p\) itself is actually also unremarkable,\[

C_p = (A_8\times A_{12}) \rtimes \ZZ_2.

\] It's just "all the even permutations of the 8 corner blocks" and "all the even permutations of the 12 middle-of-edge blocks" [thanks for the fix], and some extra \(\ZZ_2\) operation that mixes them in a correlated way; I actually think that the extra \(\ZZ_2\) simply means that an odd permutation of the corners is allowed in combination with an odd permutation of the edges. (It's quite common that only even permutations may be obtained; it is also the case of Loyd's 15 puzzle.) The homomorphisms needed for the two semidirect products contained in \(G\) and \(C_p\) respectively deserve some extra discussion but if you look at the "simple factors" that appear in the Rubik's cube group, they are just \(\ZZ_2,\ZZ_3,A_8,A_{12}\), and that's it. The group theory of Rubik's cube is just a simple conglomerate of several simplest simple finite groups in the classification.
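
Multiplying the advertised factors together (a numerical check of mine, not in the post) reproduces the familiar order of the full Rubik's cube group.

    from math import factorial

    C0 = 3**7 * 2**11                                     # orientations: Z_3^7 x Z_2^11
    Cp = (factorial(8) // 2) * (factorial(12) // 2) * 2   # (A_8 x A_12) extended by Z_2
    print(C0 * Cp)   # 43252003274489856000, i.e. about 4.3 x 10^19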



Off-topic: create a 2-minute video showing why the LHC rocks and win a contest organized by the Fermilab, an Illinois-based fan club of the LHC. If the physicists and P.R. folks pick your work (free of obscenities) – sent before the end of May – you will get tickets for 2 from the U.S. to Chicago plus a visit to the Fermilab.

No groups of the Lie type and no sporadic groups are involved at all! And no Tits, either. If you want to become able to "solve" the cube, you must identify the "elementary moves" i.e. the rotations of the faces/layers as products of the "mathematically elementary" generators of the \(\ZZ_2,\ZZ_3,A_8,A_{12}\) factors, roughly speaking, and invert this relationship so that you will be able to permute the individual blocks (by a sequence of rotations of the faces) and then rotate them "almost separately".

Note that the usual "algorithms for the mortals" first tell you how to place the blocks of the "upper, first layer" at the right places; then how to locally rotate them if needed. The sequences of moves are rather simple and you have lots of freedom because you are allowed to bring new disorder to the "second layer" and the "third layer". Then you do the same thing to the second layer – first doing the sequences of moves that bring the blocks to the right places; and finally rotate them to the right orientations if needed. You are allowed to bring new chaos to the third layer but not to the first layer that has already been polished. Finally, you need to sort the "bottom, third layer". The sequences to do it are longer because you have to preserve the first two layers. Again, you first settle the right locations of the blocks, and then rotate them if needed. I already told you that if one corner block or one middle-of-edge block remains rotated, it's because somebody has dismantled the Hungarian cube before you, and you must break it again to fix the problem (or peel the stickers).

Looking at the sporadic groups

The groups of the Lie type are more complex – especially those based on the exceptional Lie groups – but the sporadic groups are even more remarkable. They're 26 or 27 unusual beasts – beasts that, unlike all other simple finite groups, don't arrive in infinite families. They cannot be "mass produced", if you wish. They depend on no integer parameters that could be made arbitrarily large.

Each sporadic group requires a special discussion and boasts its individual virtues and problems. I have said that the semisporadic Tits group arises from a technical problem that appears when the exceptional Lie group \(F_4\), twisted in an allowed way, is built over one of the finite fields based on \(\ZZ_2\).

On the contrary, the largest sporadic groups have much more grandiose stories. The largest sporadic group is the monster group – with almost \(10^{54}\) elements. This is the group related to the \(j\)-function, in some sense the "most important" function on the fundamental domain of \(SL(2,\ZZ)\). The monster group is the largest sporadic group and the master among them – in a similar way in which \(E_8\) is the daddy of exceptional Lie groups. But I need to emphasize that the monster group is in no way "the same thing" as \(E_8\). Their mathematics is equally different; they just happen to be the tips of two icebergs.

The first TRF text about the monstrous moonshine was written in 2006 and many others were added later. String theory has explained why numbers like \(1+196,883\) appear at two seemingly totally unrelated places: one may construct a perturbative string theory, a conformal field theory on the world sheet (some bosonic string compactified on the 24-dimensional torus derived from the Leech lattice, roughly speaking; the Leech lattice is the unique 24-dimensional even self-dual lattice without the minimum-length vectors that the lattices \(\Gamma^{16}\) and \(\Gamma^8+\Gamma^8\) do have and that produce the \(SO(32)\) and \(E_8\times E_8\) gauge groups), and show that its spectrum enjoys the monster group symmetry. The degeneracies of the states must therefore be (sums of) dimensions of representations of the monster group, and so on.

Witten has brought evidence that this CFT (with some boundary conditions and co-existence of the left-movers and right-movers) is the holographic AdS/CFT dual of the pure gravity just with black holes in \(AdS_3\), in some sense one of the most structureless theories of quantum gravity. No local graviton or matter excitations there (it's 2+1D), just black holes. It's remarkable – one of the seemingly "most boring" theories of quantum gravity actually has the "most fascinating and largest sporadic" discrete group of symmetries if converted to the exact CFT description.

(Gaiotto showed that only the "minimum" radius comparable to the Planck length has a chance to work – the members of the infinite family of increasingly flat \(AdS_3\) spaces don't admit the monstrous \(CFT_2\) description conjectured by Witten.)

While this complementarity between "super simplicity" and "super complexity" seems intriguing and arguably a principle of mathematics and Nature, I have very limited intuition for "why" the monster group exists at all and how I should imagine it in a way that "fits my brain" completely. The second largest sporadic group is the baby monster group. Its order is over \(10^{33}\), more than the square root of the order of the monster group, and it may be defined as a centralizer of a \(\ZZ_2\) subgroup of the monster group (and probably in many other ways that are harder to formulate).

Mathieu group, umbral moonshine, K3 surfaces, Golay code

I want to spend much more time with the group \(M_{24}\), the largest one among the Mathieu sporadic groups, and related mathematical structures. It is the third most fundamental (but not the third largest, according to the order) sporadic group after the monster and baby monster but I am sure that mathematicians would already disagree at this point (in both ways: John Conway – who is arguably history's most important explorer of sporadic groups – actually considers \(M_{24}\) to be the most amazing finite group in all of mathematics; I think that the K3 surfaces linked to this group are cool but, in some sense, "the second" in their depth after the tori and similar simple things).

All the interesting observations below are related to this group. It is sensible to imagine that the amount of "stunning mathematics" of a similar kind is approximately 26 times larger than what you see below, and all of it is "qualitatively different".

First, what is the \(M_{24}\) group? It is a group with almost 245 million elements, substantially fewer than the monster group or the baby monster group. You may build it in various ways from "more regular" groups such as \(PSL(3,4)\). But if you want to see a "full object" whose symmetry group is \(M_{24}\), you can have it: it is the binary Golay code.



Blue is zero, red is one.

The picture above describes "all the nontrivial information" that you can't easily remember and that is needed for the construction of the code. The \(M_{24}\) group – along with the K3-related quantities within string theory – "automatically follow" from this structure if you study it well enough.

The Golay code was discovered in a remarkable 1949 paper by Marcel Golay. Tons of wisdom are linked to this unusual mathematical structure but the original paper – see it by clicking at the URL in this paragraph – was just half a page long! Moreover, Marcel Golay was a guy working for Signal Corps Engineering Laboratories in New Jersey. Those who say that research labs in commercial companies can't produce valuable pure science may look at yet another disproof of their assertion.

What is the problem that Golay was solving? He played with noisy transmission of information, Shannon entropy (greetings to Shannon), and so on. The general problem is how to efficiently transmit (let's only consider binary) information if there is some risk that several of the bits will be reverted due to noise.

Imagine you want to transmit 12 bits of information. If you just transfer 12 bits and some of them are wrong, too bad: the information is transmitted incorrectly. You may transfer those 12 bits twice. If the first copy of the 12 bits disagrees with the second copy due to an error, you know that there is an error. But you don't know which of the two copies of the 12-bit word is right. Moreover, if there are 2 wrong bits among the 24 bits, it may happen that the errors appear at the same place, and you won't even recognize that the seemingly legitimate information (two identical groups of 12 bits) is corrupt. Sending the 12 bits thrice is better – at least, you may pick the "majority form of the 12-bit word" among the 3 copies, with a higher chance of being right. But if there are 2 erroneous bits, it may still happen that you will send corrupt information that looks right – despite the tripled amount of information you have sent.
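
Here is a toy Python illustration (mine, not from the post) of that last failure mode: send a 12-bit word three times and decode by majority vote.

    def send_three_times(word, error_positions):
        """Concatenate three copies of the word and flip the bits at the given positions (0..35)."""
        received = list(word) * 3
        for position in error_positions:
            received[position] ^= 1
        return received

    def majority_decode(received):
        """Take the bitwise majority of the three received copies."""
        n = len(received) // 3
        copies = [received[k * n:(k + 1) * n] for k in range(3)]
        return [1 if sum(bits) >= 2 else 0 for bits in zip(*copies)]

    word = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0]   # an arbitrary 12-bit message

    # A single flipped bit anywhere is outvoted by the two clean copies:
    print(majority_decode(send_three_times(word, [5])) == word)        # True

    # Two errors hitting the same bit position in two different copies outvote
    # the correct copy, and the corrupt result still looks perfectly legitimate:
    print(majority_decode(send_three_times(word, [5, 17])) == word)    # False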

There are better ways to transmit bits so that you may fix the errors and/or be sure that the result is OK, assuming that the percentage of errors remains relatively low (but it's allowed to be higher than 1 wrong bit). And the binary Golay code is one of the greatest – and, in fact, also most important in practice – error-correcting codes.



Let me post this diagram again. You see that the left half of the picture contains sequences from 100000000000 up to 000000000001 – with 11 zeroes and 1 digit one. For each of these "elementary" 12-bit words, the right half of the row supplies 12 extra check bits that allow the receiver to verify the validity of what was sent, or to fix a few mistakes. By now, most TRF readers probably know how to use the picture above to send the 12 bits reliably.

Twelve zeroes are sent as twenty-four zeroes. To mention a more difficult example, the first row tells you that instead of 100000000000, you send 24 bits 100000000000100111110001. Other lines tell you how to "encode" other sequences of 12 bits where the digit 1 appears exactly once. If you want to send more complicated sequences of 12 bits, you add the corresponding rows by "EXOR" i.e. modulo two in each column. For example, if you want to send 110000000000, you send "the first row EXOR the second row" which means 110000000000110100001011. You surely know how to encode the most general 12 bits (i.e. all of the 4,096 twelve-bit words), too.
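
A tiny check of that additivity, using just the two 24-bit codewords quoted above (the picture itself isn't reproduced here, so I can't confirm the resulting check bits against the table, but the first 12 bits must come out as the XOR of the two messages):

    c1 = int("100000000000100111110001", 2)   # the codeword for 100000000000 quoted above
    c2 = int("110000000000110100001011", 2)   # the codeword for 110000000000 quoted above

    c_sum = c1 ^ c2                            # XOR of the two codewords
    print(format(c_sum, "024b"))               # 010000000000010011111010
    print(format(c_sum, "024b")[:12])          # 010000000000, the XOR of the two messages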

Why is exactly this choice of the 12 extra verification bits special? It's because any two allowed sequences of 24 bits differ at many places – they differ by 8 or more bits (among the 24). This (minimum) number of "different bits" among two allowed code words is known as the (minimum) Hamming distance. For the binary Golay code, it happens to be 8, which is a lot. If you "damage" a few random places in the table, the distance will probably be smaller than 8; the "damaged" algorithm to transmit information will be less reliable than Golay's correct one.

You are invited to verify that the distance is never less than 8 on a few examples. For example, pick two random rows in the table expressed as the image and count the number of bits (among the 24) by which the two rows differ. They differ by 2 bits among the first 12 (the \(i\)-th and \(j\)-th bit, of course), and the difference in the remaining "chaotic" 12 bits will always be at least 6 bits. It just works. You need to check the distances for all pairs taken from the 4096 allowed words, not just pairs taken from the 12 "generators", however – equivalently, because the code is linear (the XOR of two allowed codewords is again an allowed codeword), it is enough to check that each of the 4,095 nonzero codewords contains at least 8 ones.
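
Here is a sketch that does the exhaustive check. The picture is not machine-readable here, so instead of the table above the script builds a version of the extended binary Golay code from the standard cyclic-code construction, under the assumption that \(x^{11}+x^{10}+x^6+x^5+x^4+x^2+1\) (0xC75) is a generator polynomial of the perfect 23-bit Golay code; rather than taking my word for it, the script verifies the minimum weight itself.

    def gf2_mul(a, b):
        """Multiply two bit-packed polynomials over GF(2) (carry-less multiplication)."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    G_POLY = 0xC75   # x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1 (assumed Golay generator)

    codewords = []
    for message in range(1 << 12):              # all 4096 twelve-bit messages
        c23 = gf2_mul(message, G_POLY)          # a 23-bit codeword of the cyclic code
        parity = bin(c23).count("1") & 1        # append an overall parity bit ...
        codewords.append((c23 << 1) | parity)   # ... to get the 24-bit extended code

    print(len(codewords))                                  # 4096
    print(min(bin(c).count("1") for c in codewords if c))  # expected: 8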

As long as there are at most 7 erroneous bits among the 24, you may safely recognize whether the received sequence of 24 bits is a correct codeword or a damaged one: there is no risk that the errors will actually create another allowed 24-bit codeword. Moreover, if at most 3 bits among the 24 bits are erroneous, you will know how to fix them. You may be somewhere on the length-8 path between two allowed codewords, but to be in the middle, you must be 4 erroneous bits from either side, so if the number of erroneous bits is at most 3, you know "where you should go".
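
Because the table itself lives in the picture, here is a self-contained sketch of mine that builds an extended binary Golay code from the textbook cyclic-code construction instead: the generator polynomial \(g(x)=x^{11}+x^{10}+x^6+x^5+x^4+x^2+1\) over \(GF(2)\), plus an overall parity bit. The resulting code is equivalent to the one in the picture only up to a relabeling of the 24 bit positions, so the individual codewords won't match the table, but the minimum distance of 8 and the correction of 3 errors come out the same.

    import random

    # Generator polynomial of the cyclic [23,12] binary Golay code,
    # g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1, written as a bit mask.
    G_POLY = 0b110001110101

    def golay24_codewords():
        """All 4096 codewords of an extended [24,12,8] binary Golay code."""
        words = []
        for msg in range(1 << 12):
            cw23 = 0
            for i in range(12):          # multiply the message by g(x) over GF(2)
                if (msg >> i) & 1:
                    cw23 ^= G_POLY << i  # shift-and-XOR = polynomial multiplication
            parity = bin(cw23).count("1") & 1
            words.append((cw23 << 1) | parity)   # append an overall parity bit
        return words

    def weight(x):
        """Hamming weight (number of 1-bits)."""
        return bin(x).count("1")

    codewords = golay24_codewords()

    # For a linear code the minimum distance equals the minimum weight of a
    # nonzero codeword, so scanning 4095 words replaces checking all pairs.
    print("minimum distance:", min(weight(w) for w in codewords if w))   # prints 8

    # Nearest-codeword decoding fixes up to 3 flipped bits.
    def decode(received):
        return min(codewords, key=lambda w: weight(w ^ received))

    random.seed(1)
    original = random.choice(codewords)
    noisy = original
    for pos in random.sample(range(24), 3):   # flip 3 of the 24 bits
        noisy ^= 1 << pos
    assert decode(noisy) == original
    print("3 errors corrected")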

If you use this trick purely for error correction, you may say that it would be enough for the minimum Hamming distance to be 7, and not 8, because if the distance between two allowed codewords is 7, you may still have 3 errors and you know to which side you should move because 3 is still less than 7/2. You may achieve the minimum Hamming distance 7 if you just drop 1 of the 24 bits. The corresponding code is the 23-bit "perfect binary Golay code" \(G_{23}\). This reduction may be helpful for the information science application but, as far as I can tell, it makes the mathematical structure less natural from the viewpoint of fundamental mathematics and physics, which is why I will always talk about the 24-bit code in physics-related texts (and below). It's the more natural mathematical structure, despite the missing adjective "perfect".

It's a cool code which may be very useful in actual transmission of signals in noisy environments. But it has far-reaching implications for mathematics and physics – via string theory.

The reason is that the "automorphism group of the binary Golay code" is the Mathieu group \(M_{24}\) (and it is similarly the smaller, less cool sporadic group \(M_{23}\) for the truncated "perfect" 23-bit code; if you want to know, Émile Léonard Mathieu introduced the first known sporadic groups \(M_{11},M_{12},M_{22},M_{23},M_{24}\) in 1861 and 1873, and the subscripts are really numbers between eleven and twenty-four although they look like pairs of small integers). One may define the group as the subgroup of \(S_{24}\), the group of permutations of the full 24 bits in the code, that leaves the set of \(2^{12}=4096\) allowed codewords invariant as a set.

(It's historically remarkable that pretty much a century of silence came after the discovery of the Mathieu groups; the next sporadic group, J1, was described by Zvonimir Janko only in 1965. Only afterwards did things speed up: 21 sporadic groups were found within an M-theory-extended decade between 1965 and 1976.)

So a particularly clever and special error-correcting code described in a half-a-page paper from 1949 (and in group-theoretical papers that were much longer but almost 100 years older) is enough to define the third most fundamental sporadic group in group theory! The Mathieu \(M_{24}\) group is the symmetry group of the code.

I must also mention that the binary Golay code (and therefore the group) is also closely related to the Leech lattice (that may be used to define the CFT with the monster group). Why? Because in the Leech lattice, the allowed coordinates modulo 8 (times the "quantum" of the coordinate) are in one-to-one correspondence with the allowed binary Golay codewords. To get \(M_{24}\) from the monster group, you first need to throw away all the "purely stringy" elements of the symmetry group (only keep the symmetries of the Leech lattice), and then pick those elements that lie in \(S_{24}\), i.e. those that just permute the 24 coordinates.



Your screen doesn't have sufficiently many dimensions but this "animated quartic" conveys the spirit of what the K3 surfaces look like.

Relationship to K3 surfaces

If you consider the "simplest", most (super)symmetric compactifications of string/M-theory, the toroidal compactifications are the first ones you consider. They preserve all the supersymmetries of the decompactified spacetime. The simplest non-flat compactification manifold is the K3 surface, one of the family of 4-real-dimensional curved manifolds, a 4-real-dimensional counterpart of the "Calabi-Yau manifolds" (TRF random search, Aspinwall's introduction). One-half of the supersymmetry is preserved. M-theory or type II string theory on a K3 surface is dual (equivalent) to heterotic string theory on a torus (it's called the string-string duality).

On the world sheet, you may calculate some kind of a twisted partition function of the conformal field theory that describes strings on a K3 manifold perturbatively. This partition function is known as the "elliptic genus"\[

Z_{ell}(\tau;z) = {\rm Tr}_{{\mathcal R}\times {\mathcal R}} (-1)^{F_L+F_R} q^{L_0} \bar q^{\bar L_0} e^{4\pi i z J_{0,L}^3}

\] where the extra exponential twists the partition sum by a transformation in the affine \(SU(2)\) algebra – that is another local gauge symmetry on the world sheet just like the diffeomorphisms and Weyl symmetry (plus the local world sheet supersymmetry). The power of \((-1)\) turns this partition function into a supertrace, not a trace, so there are lots of cancellations – similar to those that appear in the indices (the singular is "index"). However, this cancellation is only partial which is why the elliptic genus effectively gets contributions from some "short representations" of SUSY but it remembers some of the properties of these representations. The elliptic genus is therefore kind of holomorphic and "something in between" the index, which is a simple integer, and the generic partition sum, which is a non-holomorphic function.

(If you care about a general technicality explaining something from the previous sentence, note that the exponential factor involving \(J_{0,L}^3\) in the supertrace depends on \(L\), the left-movers only, and breaks the symmetry between the left-movers and right-movers on the world sheet. Due to this extra factor which is a sign, the representations that are completely annihilated by the right-moving supersymmetries see a complete Bose-Fermi cancellation in the supertrace; while those annihilated by the left-moving supercharges don't. [Or vice versa? Be careful if you need that.] That's why the elliptic genus behaves as an index from the viewpoint of the right-moving degrees of freedom; but as a partition sum from the viewpoint of the left-moving ones. The elliptic genus is literally a heterosis – a left-right hybrid – of an index and a partition sum, in the same sense in which the heterotic string is a heterosis of the bosonic string and the superstring. And that's why it, the elliptic genus, has a holomorphic dependence but no anti-holomorphic one.)

In 2010, Eguchi, Ooguri, and Tachikawa noticed that there apparently exists a new kind of moonshine that involves the perturbative string theories on K3 surfaces (TRF 2010).

In the ordinary monstrous moonshine, one expands the \(j\)-function as\[

j(\tau) = \frac 1q + 744 + 196884 q + 21493760 q^2 + \dots

\] where \(q\equiv \exp(2\pi i \tau)\) and sees dimensions of simple representations of the monster group such as \(1+196,883\) everywhere. Similarly, Eguchi et al. expanded the elliptic genus for K3 – well, I will write the expansion for \(\Sigma(\tau)\) which is related to \(Z_{ell}(K3)(\tau; z)\) by a rather simple relationship that nevertheless depends on modular functions that not everyone knows (equation 1.7 in Eguchi et al.) – and they saw that it was\[

\eq{
\frac{\Sigma(\tau)}{-2q^{-1/8}} &= 1 - 45 q - 231 q^2 - 770 q^3 \\
&\quad - 2277 q^4 - 5796 q^5 - 13915 q^6 - 30843 q^7 - \dots
}

\] Much like in the case of the \(j\)-function, the coefficients in front of \(q^n\) are rather interesting integers. Well, they are smaller and less impressive integers than 196,884. But many of them are interesting enough.

Well, the fun is that if you look at the dimensions of the irreducible representations of the Mathieu group \(M_{24}\), you will find the numbers \(1,45,231,770,2277,5796\) among them (among the 26 numbers describing the dimensions of the irreps). The following two coefficients, \(13915,30843\), are dimensions of reducible representations, i.e. simple and unique sums of (two or six) numbers describing the dimensions of the irreps. The following coefficients (not shown above) may also be decomposed but the decomposition is no longer unique.
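
For skeptical readers, the arithmetic may be checked in a few lines. The list of dimensions below is my transcription from the ATLAS of Finite Groups, not something taken from this post, so re-check it before relying on it; the snippet verifies the membership claims, shows one way to write \(13915\) and \(30843\) as sums of entries of the list (a two-term and a six-term sum, consistent with the statement above), and performs the standard consistency check that the squared dimensions add up to the order of \(M_{24}\).

    # Dimensions of the 26 irreducible representations of M24, quoted from the
    # ATLAS of Finite Groups (my transcription -- re-check before relying on it).
    M24_IRREP_DIMS = [1, 23, 45, 45, 231, 231, 252, 253, 483, 770, 770, 990, 990,
                      1035, 1035, 1035, 1265, 1771, 2024, 2277, 3312, 3520,
                      5313, 5544, 5796, 10395]

    # Standard consistency check: the squared degrees must add up to |M24|.
    assert sum(d * d for d in M24_IRREP_DIMS) == 244823040

    # The first few expansion coefficients appear directly in the list...
    for coeff in (1, 45, 231, 770, 2277, 5796):
        assert coeff in M24_IRREP_DIMS

    # ...and the next two can be written as sums of entries: a two-term sum and
    # a six-term sum, matching the "two or six" statement in the text.
    assert 3520 + 10395 == 13915
    assert 1771 + 2024 + 5313 + 5544 + 5796 + 10395 == 30843
    print("all checks passed")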

Because of the experience with the monstrous moonshine, Eguchi et al. were already pretty much sure that the agreement between these numbers can't be a coincidence. There must exist an explanation – a different one than in the case of the monstrous moonshine; but one that plays the same role – which links the two seemingly different mathematical tasks, namely the third most fundamental sporadic group with the partition functions on K3 surfaces in string theory.

The task of demystifying these connections becomes virtually complete when one constructs the corresponding perturbative string theory with the sporadic symmetry, but with demonstrable links of its (twisted) partition sums to the modular functions such as \(j\) and \(\Sigma(K3)\) above. Mathematicians have spent a lot of time on this and have proved "almost everything" that satisfies them – which is not quite the complete "visualization in terms of string theory", but it is close.

They like to prove that it is possible to write the modular functions as certain series – McKay-Thompson series – built from infinite-dimensional representations of a certain type. These "infinite-dimensional representations" (or "graded modules", in the refined jargon of the mathematicians) are "almost" the spectra of the relevant string theory, but the mathematicians don't construct the string theory as explicitly as string theorists would want, so I think it's fair to say that they are not cracking the problem as completely as physicists (string theorists) would demand.

So far, the latest proof in this mathematical industry was published in early March 2015. Duncan, Griffin, and Ono have developed the proof (linking the modular forms with some infinite-dimensional representations via such series) for the 22 remaining examples of "umbral moonshine" that were conjectured in the literature. The proof for \(M_{24}\) that we focused upon was settled by Terry Gannon in late 2012 (while some previous insights were made by Gaberdiel et al.). You may see that the progress is relatively fast here.

Here, things get very technical – and they're not formulated in the physicist-friendly language I would find easy to devour – so let me remain superficial. The adjective "umbral" is derived from the Latin word "umbra" for a "shadow" – and it's used for these non-monstrous versions of moonshine because the corresponding mock modular forms always allow you to compute an affiliated modular function that is a "shadow" of the mock modular form. The "umbral moonshine" theorem is meant to be a generalization of the "Mathieu moonshine" to numerous other groups that, like \(M_{24}\), urge you to use the shadow modular functions; the monstrous moonshine doesn't belong here.

There seems to be an intermediate step which may be the reason why it is believed that the string theories unmasking these types of moonshine don't actually have the exact sporadic symmetry group whose representations appear; but they have a symmetry group related to it in some way, too.

Some complications exist but in the end, I believe that sometime in the future, people will have the full description of the relevant string theories that unmask all the shocking surprises. Maybe all of them will be some rather simple orbifolds etc. based on the Leech-lattice CFT we know from monstrous moonshine. I find it likely that the "24 bits" of the Golay code will be in one-to-one correspondence with the 24 dimensions of the Leech lattice – effectively with the 24 purely transverse dimensions of the bosonic string theory that is helpful. And whenever K3 is involved, I think that these 24 directions will be mapped to the K3 cohomology, so the 24-dimensional flat description will be closely related to the heterotic dual description of a K3 in the string-string duality, despite the wrong 24+0 signature (in the \(M_{24}\) structures) instead of the 20+4 signature (intersection numbers of homologies in K3). Note that in the heterotic description, the signature means that the (20) positive-signature dimensions become left-movers and the (4) negative-signature ones become right-movers, so this "asymmetry" has to be liquidated in some way and everything must be made left-moving.

Religious implications

This title is perhaps somewhat over the edge – but I think not by too much. What do I mean by saying that these things have religious implications?

I think that when you look at some mathematical structures – or features or compactifications of string/M-theory – they are often ordered hierarchically in a way that resembles the list in the classification of the finite simple groups.

Note that at the beginning, you have the "easy" structures that may be constructed from pieces that are available to humans: \(\ZZ_p\), then \(A_n\), and slightly more complicated ones. Things get more complicated but, when clumped properly, the path to the most shocking and exceptional structures – ending with the sporadic groups and the monster group in particular – is finite.

My meme is that the easy constructive groups like \(\ZZ_2\) are close to humans who are mortal; low-brow individuals such as Lee Smolin proudly remain attached to this wild primitive animalistic side of the chain. And the opposite side is close to God (and refined string theorists). OK, I hope you forgave me that religious interpretation. ;-)

We understand things that are human but there is a dual perspective of God who primarily understands the things on the opposite end, like the monster group, and who needs to perform some special activity – i.e. offer the apples that explain sex – in order to introduce sins and to break the beauty of the divine world and to create all the mundane unspiritual stuff that we know from the everyday life, like the Rubik's cubes I described in some detail.

The path between Man and God is finite, when organized properly, and the more we internalize the thinking that makes the sporadic groups or the monster look like the easy starting points, the closer to the perspective of God we become. All truly valuable ideas in mathematics and physics may perhaps be organized using this divine, string-theory-based perspective on a sunny day in the future. Even the realistic vacua of string/M-theory will be viewed as a partial symmetry breaking of the truly God-like compactifications boasting things like the monster group symmetry. Monstrous insights will therefore be a part of the answer to currently controversial questions such as "what was there before the big bang".

And that's the memo.



The first bonus: the hierarchy of power organizing the sporadic groups. It's remarkable that such a messy graph is "pure mathematics", isn't it? Extraterrestrials may draw it in the same way.

The lines essentially denote embedding as subgroups. The monster M is at the top, and the baby monster B and 18 other sporadic groups consider the monster M their holy father; there are 20 members of this Monstrous Catholic Church. (The Catholics call themselves "the happy family" which is clearly propaganda, so I prefer "Catholics" LOL.) There is some opposition, too. The six groups away from the monster-led hierarchy are not known as heretics or renegades or mavericks. It would be too silly to use such words. Instead, they are the pariah groups. ;-)

They include J1 (a group that may be uniquely specified by some 2-Sylow subgroups and their properties), J3, and J4 – three of the four J-groups. The Catholic J2, the Hall-Janko group, is linked to these three mostly sociologically – through the Croatian mathematician Zvonimir Janko who found all four – not by some intrinsic system. You see that J4, a pariah, was rather close to the church anyway, and \(M_{24}\), emphasized in this article, is a secret child of the Monstrous Pope M and the pariah J4. That could be related to the fact that it is linked to K3 surfaces which are dual to the heterotic string, also a hybrid of two very different parents. In the lower right corner, you see J1 embedded in the O'Nan group ON, and J3 along with the Rudvalis group Ru.

The last pariah I haven't mentioned is the (far left) Lyons group Ly. Like other far leftists, the Lyons group seems to have no importance besides its existence. I hope that the Catholic readers will be thrilled to learn that, to a lesser extent, the same is true of most pariahs. Well, so far. One wants to believe that the other sporadic groups must also be "comparably important" to M or \(M_{24}\) and that we're only ignorant about their role because the pariahs have been discriminated against; but the alternative assumption that they're really useless junk is plausible, too. ;-)



P.S. Looking at the moonshine things may be a great way to spend the remaining 315 days that we have before the planet becomes a total frying pan according to the world's president of the climate alarm. ObamaCare isn't sufficiently funded to store this individual in a psychiatric asylum yet.

by Luboš Motl (noreply@blogger.com) at March 18, 2015 08:27 AM

March 17, 2015

Symmetrybreaking - Fermilab/SLAC

Experiments combine to find mass of Higgs

The CMS and ATLAS experiments at the Large Hadron Collider joined forces to make the most precise measurement of the mass of the Higgs boson yet.

At the dawn of the Large Hadron Collider restart, the CMS and ATLAS collaborations are still gleaning valuable information from the accelerator’s first run. Today, they presented the most precise measurement to date of the Higgs boson’s mass.

“This combined measurement will likely be the most precise measurement of the Higgs boson’s mass for at least one year,” says CMS scientist Marco Pieri of the University of California, San Diego, co-coordinator of the LHC Higgs combination group. “We will need to wait several months to get enough data from Run II to even start performing any similar analyses.”

The mass is the only property of the Higgs boson not predicted by the Standard Model of particle physics—the theoretical framework that describes the interactions of all known particles and forces in the universe.

The mass of subatomic particles is measured in GeV, or giga-electronvolts. (A proton weighs about 1 GeV.) The CMS and ATLAS experiments measured the mass of the Higgs to be 125.09 ± 0.24 GeV. This new result narrows in on the Higgs mass with more than 20 percent better precision than any previous measurement.
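
The reason a combination helps is, at its core, the usual inverse-variance weighting of independent measurements. The short sketch below is not from the article, and the input numbers are purely illustrative placeholders rather than the official per-experiment results; the real ATLAS+CMS combination also treats correlated systematic uncertainties and full likelihoods, which a naive weighted average ignores.

    # Inverse-variance weighted average of two independent measurements.
    # Input numbers are illustrative placeholders, not official results.
    measurements = [
        (125.4, 0.4),   # hypothetical experiment A: mass [GeV], uncertainty [GeV]
        (125.0, 0.3),   # hypothetical experiment B
    ]

    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mass = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5

    print(f"combined mass = {mass:.2f} +/- {sigma:.2f} GeV")
    # The combined uncertainty is smaller than either input, which is the point
    # of a joint measurement.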

Experiments at the LHC measure the Higgs by studying the particles into which it decays. This measurement used decays into two photons or into four leptons (electrons or muons). The scientists used data collected from about 4000 trillion proton-proton collisions.

By precisely pinning down the Higgs mass, scientists can accurately calculate its other properties—such as how often it decays into different types of particles. By comparing these calculations with experimental measurements, physicists can learn more about the Higgs boson and look for deviations from the theory—which could provide a window to new physics.

“This is the first combined publication that will be submitted by the ATLAS and CMS collaborations, and there will be more in the future," says deputy head of the ATLAS experiment Beate Heinemann, a physicist from the University of California, Berkeley, and Lawrence Berkeley National Laboratory.

ATLAS and CMS are the two biggest Large Hadron Collider experiments and are designed to measure the properties of particles like the Higgs boson and to perform general searches for new physics. Their similar capabilities allow them to cross-check and verify experimental results, but they also inspire a friendly competition between the two collaborations.

“It’s good to have competition,” Pieri says. “Competition pushes people to do better. We work faster and more efficiently because we always like to be first and have better results.”

Normally, the two experiments maintain independence from one another to guarantee their results are not biased or influenced by the other. But with these types of precision measurements, working together and performing combined analyses has the benefit of strengthening both experiments’ results.

“CMS and ATLAS use different detector technologies and different detailed analyses to determine the Higgs mass,” says ATLAS spokesperson Dave Charlton of the University of Birmingham. “The measurements made by the experiments are quite consistent, and we have learnt a lot by working together, which stands us in good stead for further combinations.”

It also provided the unique opportunity for the physicists to branch out from their normal working group and learn what life is like on the other experiment.

“I really enjoyed working with the ATLAS collaboration,” Pieri says. “We normally always interact with the same people, so it was a real pleasure to get to know better the scientists working across the building from us.”

With this groundwork for cross-experimental collaboration laid and with the LHC restart on the horizon, physicists from both collaborations look forward to working together to increase their experimental sensitivity. This will enable them not only to make more precise measurements in the future, but also to look beyond the Standard Model into the unknown.

 


by Sarah Charley at March 17, 2015 03:38 PM

John Baez - Azimuth

Planets in the Fourth Dimension

You probably know that planets go around the sun in elliptical orbits. But do you know why?

In fact, they’re moving in circles in 4 dimensions. But when these circles are projected down to 3-dimensional space, they become ellipses!

This animation by Greg Egan shows the idea:

The plane here represents 2 of the 3 space dimensions we live in. The vertical direction is the mysterious fourth dimension. The planet goes around in a circle in 4-dimensional space. But down here in 3 dimensions, its ‘shadow’ moves in an ellipse!

What’s this fourth dimension I’m talking about here? It’s a lot like time. But it’s not exactly time. It’s the difference between ordinary time and another sort of time, which flows at a rate inversely proportional to the distance between the planet and the sun.

The movie uses this other sort of time. Relative to this other time, the planet is moving at constant speed around a circle in 4 dimensions. But in ordinary time, its shadow in 3 dimensions moves faster when it’s closer to the sun.

All this sounds crazy, but it’s not some new physics theory. It’s just a different way of thinking about Newtonian physics!

Physicists have known about this viewpoint at least since 1980, thanks to a paper by the mathematical physicist Jürgen Moser. Some parts of the story are much older. A lot of papers have been written about it.

But I only realized how simple it is when I got this paper in my email, from someone I’d never heard of before:

• Jesper Göransson, Symmetries of the Kepler problem, 8 March 2015.

I get a lot of papers by crackpots in my email, but the occasional gem from someone I don’t know makes up for all those.

The best thing about Göransson’s 4-dimensional description of planetary motion is that it gives a clean explanation of an amazing fact. You can take any elliptical orbit, apply a rotation of 4-dimensional space, and get another valid orbit!

Of course we can rotate an elliptical orbit about the sun in the usual 3-dimensional way and get another elliptical orbit. The interesting part is that we can also do 4-dimensional rotations. This can make a round ellipse look skinny: when we tilt a circle into the fourth dimension, its ‘shadow’ in 3-dimensional space becomes thinner!

In fact, you can turn any elliptical orbit into any other elliptical orbit with the same energy by a 4-dimensional rotation of this sort. All elliptical orbits with the same energy are really just circular orbits on the same sphere in 4 dimensions!

Jesper Göransson explains how this works in a terse and elegant way. But I can’t resist summarizing the key results.

The Kepler problem

Suppose we have a particle moving in an inverse square force law. Its equation of motion is

\displaystyle{ m \ddot{\mathbf{r}} = - \frac{k \mathbf{r}}{r^3} }

where \mathbf{r} is its position as a function of time, r is its distance from the origin, m is its mass, and k says how strong the force is. From this we can derive the law of conservation of energy, which says

\displaystyle{ \frac{m \dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{k}{r} = E }

for some constant E that depends on the particle’s orbit, but doesn’t change with time.

Let’s consider an attractive force, so k > 0, and elliptical orbits, so E < 0. Let’s call the particle a ‘planet’. It’s a planet moving around the sun, where we treat the sun as so heavy that it remains perfectly fixed at the origin.

I only want to study orbits of a single fixed energy E. This frees us to choose units of mass, length and time in which

m = 1, \;\; k = 1, \;\; E = -\frac{1}{2}

This will reduce the clutter of letters and let us focus on the key ideas. If you prefer an approach that keeps in the units, see Göransson’s paper.

Now the equation of motion is

\displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} }

and conservation of energy says

\displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{1}{r} = -\frac{1}{2} }

The big idea, apparently due to Moser, is to switch from our ordinary notion of time to a new notion of time! We’ll call this new time s, and demand that

\displaystyle{ \frac{d s}{d t} = \frac{1}{r} }

This new kind of time ticks more slowly as you get farther from the sun. So, using this new time speeds up the planet’s motion when it’s far from the sun. If that seems backwards, just think about it. For a planet very far from the sun, one day of this new time could equal a week of ordinary time. So, measured using this new time, a planet far from the sun might travel in one day what would normally take a week.

This compensates for the planet’s ordinary tendency to move slower when it’s far from the sun. In fact, with this new kind of time, a planet moves just as fast when it’s farthest from the sun as when it’s closest.

Amazing stuff happens with this new notion of time!

To see this, first rewrite conservation of energy using this new notion of time. I’ve been using a dot for the ordinary time derivative, following Newton. Let’s use a prime for the derivative with respect to s. So, for example, we have

\displaystyle{ t' = \frac{dt}{ds} = r }

and

\displaystyle{ \mathbf{r}' = \frac{d\mathbf{r}}{ds} = \frac{dt}{ds}\frac{d\mathbf{r}}{dt} = r \dot{\mathbf{r}} }

Using this new kind of time derivative, Göransson shows that conservation of energy can be written as

\displaystyle{ (t' - 1)^2 + \mathbf{r}' \cdot \mathbf{r}' = 1 }

This is the equation of a sphere in 4-dimensional space!

I’ll prove this later. First let’s talk about what it means. To understand it, we should treat the ordinary time coordinate t and the space coordinates (x,y,z) on an equal footing. The point

(t,x,y,z)

moves around in 4-dimensional space as the parameter s changes. What we’re seeing is that the velocity of this point, namely

\mathbf{v} = (t',x',y',z')

moves around on a sphere in 4-dimensional space! It’s a sphere of radius one centered at the point

(1,0,0,0)

With some further calculation we can show some other wonderful facts:

\mathbf{r}''' = -\mathbf{r}'

and

t''' = -(t' - 1)

These are the usual equations for a harmonic oscillator, but with an extra derivative!

I’ll prove these wonderful facts later. For now let’s just think about what they mean. We can state both of them in words as follows: the 4-dimensional velocity \mathbf{v} carries out simple harmonic motion about the point (1,0,0,0).

That’s nice. But since \mathbf{v} also stays on the unit sphere centered at this point, we can conclude something even better: v must move along a great circle on this sphere, at constant speed!

This implies that the spatial components of the 4-dimensional velocity have mean 0, while the t component has mean 1.

The first part here makes a lot of sense: our planet doesn’t drift ever farther from the Sun, so its mean velocity must be zero. The second part is a bit subtler, but it also makes sense: the ordinary time t moves forward at speed 1 on average with respect to the new time parameter s, but its rate of change oscillates in a sinusoidal way.

If we integrate both sides of

\mathbf{r}''' = -\mathbf{r}'

we get

\mathbf{r}'' = -\mathbf{r} + \mathbf{a}

for some constant vector \mathbf{a}. This says that the position \mathbf{r} oscillates harmonically about a point \mathbf{a}. Since \mathbf{a} doesn’t change with time, it’s a conserved quantity: it’s called the Runge–Lenz vector.

Often people start with the inverse square force law, show that angular momentum and the Runge–Lenz vector are conserved, and use these 6 conserved quantities and Noether’s theorem to show there’s a 6-dimensional group of symmetries. For solutions with negative energy, this turns out to be the group of rotations in 4 dimensions, \mathrm{SO}(4). With more work, we can see how the Kepler problem is related to a harmonic oscillator in 4 dimensions. Doing this involves reparametrizing time.

I like Göransson’s approach better in many ways, because it starts by biting the bullet and reparametrizing time. This lets him rather efficiently show that the planet’s elliptical orbit is a projection to 3-dimensional space of a circular orbit in 4d space. The 4d rotational symmetry is then evident!

Göransson actually carries out his argument for an inverse square law in n-dimensional space; it’s no harder. The elliptical orbits in n dimensions are projections of circular orbits in n+1 dimensions. Angular momentum is a bivector in n dimensions; together with the Runge–Lenz vector it forms a bivector in n+1 dimensions. This is the conserved quantity associated to the (n+1) dimensional rotational symmetry of the problem.

He also carries out the analogous argument for positive-energy orbits, which are hyperbolas, and zero-energy orbits, which are parabolas. The hyperbolic case has the Lorentz group symmetry and the zero-energy case has Euclidean group symmetry! This was already known, but it’s nice to see how easily Göransson’s calculations handle all three cases.

Mathematical details

Checking all this is a straightforward exercise in vector calculus, but it takes a bit of work, so let me do some here. There will still be details left to fill in, and I urge that you give it a try, because this is the sort of thing that’s more interesting to do than to watch.

There are a lot of equations coming up, so I’ll put boxes around the important ones. The basic ones are the force law, conservation of energy, and the change of variables that gives

\boxed{  t' = r , \qquad  \mathbf{r}' = r \dot{\mathbf{r}} }

We start with conservation of energy:

\boxed{ \displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} -  \frac{1}{r}  = -\frac{1}{2} } }

and then use

\displaystyle{ \dot{\mathbf{r}} = \frac{d\mathbf{r}/ds}{dt/ds} = \frac{\mathbf{r}'}{t'} }

to obtain

\displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{2 t'^2}  - \frac{1}{t'} = -\frac{1}{2} }

With a little algebra this gives

\boxed{ \displaystyle{ \mathbf{r}' \cdot \mathbf{r}' + (t' - 1)^2 = 1} }

This shows that the ‘4-velocity’

\mathbf{v} = (t',x',y',z')

stays on the unit sphere centered at (1,0,0,0).

The next step is to take the equation of motion

\boxed{ \displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} } }

and rewrite it using primes (s derivatives) instead of dots (t derivatives). We start with

\displaystyle{ \dot{\mathbf{r}} = \frac{\mathbf{r}'}{r} }

and differentiate again to get

\ddot{\mathbf{r}} = \displaystyle{ \frac{1}{r} \left(\frac{\mathbf{r}'}{r}\right)' }  = \displaystyle{ \frac{1}{r} \left( \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^2} \right) } = \displaystyle{ \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^3} }

Now we use our other equation for \ddot{\mathbf{r}} and get

\displaystyle{ \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^3} = - \frac{\mathbf{r}}{r^3} }

or

r \mathbf{r}'' - r' \mathbf{r}' = -\mathbf{r}

so

\boxed{ \displaystyle{ \mathbf{r}'' =  \frac{r' \mathbf{r}' - \mathbf{r}}{r} } }

To go further, it’s good to get a formula for r'' as well. First we compute

r' = \displaystyle{ \frac{d}{ds} (\mathbf{r} \cdot \mathbf{r})^{\frac{1}{2}} } = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}}{r} }

and then differentiating again,

r'' = \displaystyle{\frac{d}{ds} \frac{\mathbf{r}' \cdot \mathbf{r}}{r} } = \displaystyle{ \frac{r \mathbf{r}'' \cdot \mathbf{r} + r \mathbf{r}' \cdot \mathbf{r}' - r' \mathbf{r}' \cdot \mathbf{r}}{r^2} }

Plugging in our formula for \mathbf{r}'', some wonderful cancellations occur and we get

r'' = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{r} - 1 }

But we can do better! Remember, conservation of energy says

\displaystyle{ \mathbf{r}' \cdot \mathbf{r}' + (t' - 1)^2 = 1}

and we know t' = r. So,

\mathbf{r}' \cdot \mathbf{r}' = 1 - (r - 1)^2 = 2r - r^2

and

r'' = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{r} - 1 } = 1 - r

So, we see

\boxed{ r'' = 1 - r }

Can you get here more elegantly?

Since t' = r this instantly gives

\boxed{ t''' = 1 - t' }

as desired.

Next let’s get a similar formula for \mathbf{r}'''. We start with

\displaystyle{ \mathbf{r}'' =  \frac{r' \mathbf{r}' - \mathbf{r}}{r} }

and differentiate both sides to get

\displaystyle{ \mathbf{r}''' = \frac{r r'' \mathbf{r}' + r r' \mathbf{r}'' - r \mathbf{r}' - r'\left(r' \mathbf{r}' - \mathbf{r}\right)}{r^2} }

Then plug in our formulas for r'' and \mathbf{r}''. Some truly miraculous cancellations occur and we get

\boxed{  \mathbf{r}''' = -\mathbf{r}' }

I could show you how it works—but to really believe it you have to do it yourself. It’s just algebra. Again, I’d like a better way to see why this happens!

Integrating both sides—which is a bit weird, since we got this equation by differentiating both sides of another one—we get

\boxed{ \mathbf{r}'' = -\mathbf{r} + \mathbf{a} }

for some fixed vector \mathbf{a}, the Runge–Lenz vector. This says \mathbf{r} undergoes harmonic motion about \mathbf{a}. It’s quite remarkable that both \mathbf{r} and its norm r undergo harmonic motion! At first I thought this was impossible, but it’s just a very special circumstance.
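
If you would rather let the computer check the algebra, here is a small numerical test (my own sketch, not something from Göransson's paper). It integrates the Kepler problem directly in the fictitious time s, in the units m = k = 1, E = -1/2 used above, and verifies that the 4-velocity stays on the unit sphere around (1,0,0,0) and that \mathbf{a} = \mathbf{r} + \mathbf{r}'' stays constant along the orbit. It assumes NumPy and SciPy are available.

    import numpy as np
    from scipy.integrate import solve_ivp

    # State = (t, x, y, z, vx, vy, vz); derivatives are taken with respect to
    # the new time s, using dt/ds = r and d(position)/ds = r * (ordinary velocity).
    def rhs(s, state):
        x, v = state[1:4], state[4:7]
        r = np.linalg.norm(x)
        return np.concatenate(([r], r * v, -x / r**2))   # dt/ds, dr/ds, dv/ds

    # Start at perihelion r = 0.5 with speed fixed by |v|^2/2 - 1/r = -1/2.
    x0 = [0.5, 0.0, 0.0]
    v0 = [0.0, np.sqrt(2.0 / 0.5 - 1.0), 0.0]
    sol = solve_ivp(rhs, (0.0, 30.0), [0.0] + x0 + v0,
                    dense_output=True, rtol=1e-10, atol=1e-12)

    rl_vectors = []
    for s in np.linspace(0.0, 30.0, 400):
        state = sol.sol(s)
        x, v = state[1:4], state[4:7]
        r = np.linalg.norm(x)
        t_prime = r                        # t' = r
        x_prime = r * v                    # r' = r * (dr/dt)
        # Conservation of energy in the new variables: (t'-1)^2 + r'.r' = 1.
        assert abs((t_prime - 1.0)**2 + x_prime @ x_prime - 1.0) < 1e-7
        # Runge-Lenz vector a = r + r''; here r'' = (v.x) v - x/r by the chain rule.
        rl_vectors.append(x + (v @ x) * v - x / r)

    spread = np.ptp(np.array(rl_vectors), axis=0)
    print("spread of the Runge-Lenz vector over the orbit:", spread)   # tiny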

The quantum version of a planetary orbit is a hydrogen atom. Everything we just did has a quantum version! For more on that, see

• Greg Egan, The ellipse and the atom.

For more of the history of this problem, see:

• John Baez, Mysteries of the gravitational 2-body problem.

This also treats quantum aspects, connections to supersymmetry and Jordan algebras, and more! Someday I’ll update it to include the material in this blog post.


by John Baez at March 17, 2015 01:00 AM

March 16, 2015

Jester - Resonaances

Weekend Plot: Fermi and more dwarfs
This weekend's plot comes from the recent paper of the Fermi collaboration:

It shows the limits on the cross section of dark matter annihilation into tau lepton pairs. The limits are obtained from gamma-ray observations of 15 dwarf galaxies during 6 years. Dwarf galaxies are satellites of the Milky Way made mostly of dark matter, with few stars in them, which makes them a clean environment to search for dark matter signals. This study is particularly interesting because it is sensitive to dark matter models that could explain the gamma-ray excess detected from the center of the Milky Way. Similar limits for the annihilation into b-quarks have already been shown before at conferences. In that case, the region favored by the Galactic center excess seems entirely excluded. Annihilation of 10 GeV dark matter into tau leptons could also explain the excess. As can be seen in the plot, in this case there is also a large tension with the dwarf limits, although astrophysical uncertainties help to keep hopes alive.

Gamma-ray observations by Fermi will continue for another few years, and the limits will get stronger. But a faster way to increase the statistics may be to find more observation targets. Numerical simulations with vanilla WIMP dark matter predict a few hundred dwarfs around the Milky Way. Interestingly, a discovery of several new dwarf candidates was reported last week. This is an important development, as the total number of known dwarf galaxies now exceeds the number of dwarf characters in Peter Jackson movies. One of the candidates, known provisionally as DES J0335.6-5403 or Reticulum-2, has a large J-factor (the larger the better, much like the h-index). In fact, some gamma-ray excess around 1-10 GeV is observed from this source, and one paper last week even quantified its significance as ~4 astrosigma (or ~3 astrosigma in an alternative, more conservative analysis). However, in the Fermi analysis using the more recent Pass-8 photon reconstruction, the significance quoted is only 1.5 sigma. Moreover, the dark matter annihilation cross section required to fit the excess is excluded by an order of magnitude by the combined dwarf limits. Therefore, for the moment, the excess should not be taken seriously.

by Jester (noreply@blogger.com) at March 16, 2015 11:31 AM

March 15, 2015

Jester - Resonaances

After-weekend plot: new Planck limits on dark matter
The Planck collaboration just released updated results that include an input from their  CMB polarization measurements. The most interesting are the new constraints on the annihilation cross section of dark matter:

Dark matter annihilation in the early universe injects energy into the primordial plasma and increases the ionization fraction. Planck is looking for imprints of that in the CMB temperature and polarization spectrum. The relevant parameters are the dark matter mass and  <σv>*feff, where <σv> is the thermally averaged annihilation cross section during the recombination epoch, and feff ~0.2 accounts for the absorption efficiency. The new limits are a factor of 5 better than the latest ones from the WMAP satellite, and a factor of 2.5 better than the previous combined constraints.

What does it mean for us? In vanilla models of thermal WIMP dark matter <σv> = 3*10^-26 cm^3/sec, in which case dark matter particles with masses below ~10 GeV are excluded by Planck. Actually, in this mass range the Planck limits are far less stringent than the ones obtained by the Fermi collaboration from gamma-ray observations of dwarf galaxies. However, the two are complementary to some extent. For example, Planck probes the annihilation cross section in the early universe, which can be different from what it is today. Furthermore, the CMB constraints obviously do not depend on the distribution of dark matter in galaxies, which is a serious source of uncertainty for cosmic-ray experiments. Finally, the CMB limits extend to higher dark matter masses where gamma-ray satellites lose sensitivity. The last point implies that Planck can weigh in on the PAMELA/AMS cosmic-ray positron excess. In models where the dark matter annihilation cross section during the recombination epoch is the same as today, the mass and cross section range that can explain the excess is excluded by Planck. Thus, the new results make it even more difficult to interpret the positron anomaly as a signal of dark matter.

by Jester (noreply@blogger.com) at March 15, 2015 04:47 PM

Jester - Resonaances

B-modes: what's next

The signal of gravitational waves from inflation is the holy grail of cosmology. As is well known, at the end of a quest for the holy grail there is always the Taunting Frenchman.... This is also the fate of the BICEP quest for primordial B-mode polarization imprinted in the Cosmic Microwave Background by the gravitational waves. We have known for many months that the high intensity of the galactic dust foreground does not allow BICEP2 to unequivocally detect the primordial B-mode signal. The only open question was how strong a limit on the parameter r - the tensor-to-scalar ratio of primordial fluctuations - could be set. This is the main result of the recent paper that combines data from the BICEP2, Keck Array, and Planck instruments. BICEP2 and Keck are orders of magnitude more sensitive than Planck to CMB polarization fluctuations. However, they made measurements only at one frequency, 150 GHz, where the CMB signal is large. Planck, on the other hand, can contribute measurements at higher frequencies where the galactic dust dominates, which allows them to map out the foregrounds in the window observed by BICEP. Cross-correlating the Planck and BICEP maps allows one to subtract the dust component and extract the constraints on the parameter r. The limit quoted by BICEP and Planck, r < 0.12, is however worse than r < 0.11 from Planck's analysis of temperature fluctuations. This still leaves a lot of room for the primordial B-mode signal hiding in the CMB.

So the BICEP2 saga is definitely over, but the search for the primordial B-modes is not. The lesson we learned is that single-frequency instruments like BICEP2 are not adequate in view of the large galactic foregrounds. The road ahead is then clear: build more precise multi-frequency instruments, such that foregrounds can be subtracted. While we will not send a new CMB satellite observatory anytime soon, there are literally dozens of ground-based and balloon CMB experiments already running or coming online in the near future. In particular, the BICEP program continues, with the Keck Array running at other frequencies, and the more precise BICEP3 telescope to be completed this year. Furthermore, the SPIDER balloon experiment just completed its first Antarctica flight early this year, with a two-frequency instrument on board. Hence, better limits on r are expected already this year. See the snapshots below, borrowed from these slides, for a compilation of upcoming experiments.




Impressive, isn't it? These experiments should soon be sensitive to r~0.01, and in the long run to r~0.001. Of course, there is no guarantee of a detection. If the energy scale of inflation is just a little below 10^16 GeV, then we will never observe the signal of gravitational waves. Thus, the success of this enterprise crucially depends on Nature being kind. However, the high stakes make these searches worthwhile. A discovery would surely count among the greatest scientific breakthroughs of the 21st century. Better limits, on the other hand, will exclude some simple models of inflation. For example, single-field inflation with a quadratic potential is already under pressure. Other interesting models, such as natural inflation, may go under the knife soon.

For quantitative estimates of future experiments' sensitivity to r, see this paper.

by Jester (noreply@blogger.com) at March 15, 2015 04:46 PM

Life as a Physicist

Pi Day–We should do it more!


Today was Pi day. To join in the festivities, here in Marseille I took my kid to the Pi-day exhibit at MuCEM, the fancy new museum they built in 2013. It was packed. The room was on the top floor, crammed with people (sorry for the poor quality of the photo, my cell phone doesn’t handle the sun pouring in the windows well!). It was full of tables with various activities, all having to do with mathematics: puzzles and games that ranged from logic to group theory. It was very well done, and the students were enthusiastic and very helpful. They really wanted nothing more than to be there on a Saturday with this huge crowd of people. For the 45 minutes we were exploring, everyone seemed to be having a good time.

And when I say packed, I really do mean packed. When we left the fire marshals had arrived, and were carefully counting people. The folks (all students from nearby universities) were carefully making sure that only one person went in for everyone that went out.

Each time I go to or participate in one of these things, I’m reminded how much the public likes it. The Particle Fever movie is an obvious recent really big example. It was shown over here in Marseille in a theater for the first time about 6 months ago. The theater sold out! This was not uncommon back in the USA (though sometimes smaller audiences happened as well!). The staging was genius: the creator of the movie is a fellow physicist, and each time a town would do a showing, he would get in contact with some of his friends to do Q&A after the movie.

Another big one I helped put together was the Higgs announcement on July 3, 2012, in Seattle. There were some 6 of us. It started at midnight and went on till 2 am (closing time). At midnight, on a Tuesday night, there were close to 200 people there! We’d basically packed the bar. The bar had to kick us out as people were peppering us with questions as we were trying to leave before closing. It was a lot of fun for us, and it looked like a lot of fun for everyone else that attended.

I remember the planning stages for that clearly. We had contingency plans in case no one showed up, and ideas for how to alter our presentation if there were only 5 people. I think we were hoping for about 40 or so. And almost 200 showed up. I think most of us did not think the public was interested. This attitude is pretty common – "why would they care about the work we do?" is a frequent theme in conversations about outreach. And it is demonstrably wrong.

The lesson for people in these fields: people want to know about this stuff! And we should figure out how to do these public outreach events more often. Some cost a lot and are years in the making (e.g. the movie Particle Fever), but others are easy. For example, Science Cafés around the USA.

And in more different ways. For example, some friends of mine have come up with a neat way of looking for cosmic rays – using your cell phones (most interesting conversation on this project can be found on twitter). What a great way to get everyone involved!

And there are selfish reasons for us to do these things! A lot of funding for science comes from various government agencies in the USA and around the world (be they local or federal), and the more the public knows about what is being done with their tax dollars, and what interesting results are being produced, the better. Sure, there are people who will never be convinced, but there are also a lot who will become even more enthusiastic.

So… what are your next plans for an outreach project?


by gordonwatts at March 15, 2015 02:23 AM

March 14, 2015

Clifford V. Johnson - Asymptotia

LA Marathon Route Panorama!
(Click for much larger view.) Sunday is the 30th LA Marathon. In celebration of this, giant spotlights were set up at various points along the route (from Dodger stadium all the way out to Santa Monica... roughly a station each mile, I read somewhere) and turned on last night for about an hour between around 9 and 10. I stood on a conveniently placed rooftop and had a go at capturing this. See the picture (click for much larger view). It involved pushing the exposure by about two stops, [...] Click to continue reading this post

by Clifford at March 14, 2015 03:49 PM

Marco Frasca - The Gauge Connection

Is Higgs alone?


I am back after the announcement by CERN of the restart of the LHC. In May this year we will also have the first collisions. This is great news and we hope for the best, and the best here is just the breaking of the Standard Model.

The Higgs in the title is not Professor Higgs but rather the particle carrying his name. The question is a recurring one since the first hints of its existence appeared at the LHC. The point I would like to make is that the equations of the theory are always solved perturbatively, even though exact solutions exist that provide a mass even if the theory is massless or has a mass term with the wrong sign (the Higgs model). All you need is a finite self-interaction term in the equation. So, you will have a hard time recovering such exact solutions with perturbation techniques, and one keeps on living in ignorance. If you would like to see the technicalities involved, just take a cursory look at Dispersive Wiki.

What is the point? The matter is rather simple. The classical theory has exact massive solutions for a potential of the form V(\phi)=a\phi^2+b\phi^4, and this is a general result implying that a self-interacting scalar field always gets a mass (see here and here). Are we entitled to ignore this? Of course not. But today exact solutions have lost their charm and we get along without them.
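
To see the essence of the claim in the simplest possible setting, here is a small numerical illustration of mine (a zero-dimensional caricature, not the exact Jacobi-elliptic solutions of the papers cited below): the equation \ddot\phi+\lambda\phi^3=0 has no mass term at all, yet its solutions oscillate with a frequency proportional to their amplitude, so the quartic self-interaction alone sets a mass scale. It assumes NumPy and SciPy.

    import numpy as np
    from scipy.integrate import solve_ivp

    lam = 1.0   # quartic coupling; note there is no mass term in the equation below

    def rhs(t, y):
        phi, dphi = y
        return [dphi, -lam * phi**3]     # phi'' + lam * phi^3 = 0

    def turning(t, y):
        return y[1]                      # zero when the velocity dphi vanishes
    turning.direction = 1.0              # catch only the minima of phi

    def frequency(amplitude):
        """Oscillation frequency of the solution starting at rest at phi = amplitude."""
        sol = solve_ivp(rhs, (0.0, 200.0), [amplitude, 0.0],
                        events=turning, rtol=1e-10, atol=1e-12)
        minima = sol.t_events[0]
        return 2.0 * np.pi / (minima[1] - minima[0])   # consecutive minima = one period

    for A in (0.5, 1.0, 2.0, 4.0):
        print(f"amplitude {A:3.1f}  ->  frequency {frequency(A):.4f}")
    # The frequency doubles whenever the amplitude doubles: even without a mass
    # term, the quartic self-interaction generates an oscillation scale ("mass").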

For the quantum field theory side, what can we say? The theory can be quantized starting with these solutions, and I have shown that one gets in this way that these massive particles have higher excited states. These are not bound states (maybe they could be correctly interpreted in string theory or in a proper technicolor formulation after bosonization) but rather internal degrees of freedom. It is always the same Higgs particle but with the capability to live in higher excited states. These states are very difficult to observe because higher excited states are also highly suppressed and therefore even harder to see. In the first LHC run they could not be seen for sure. In a sense, it is as if Higgs is alone but with the capability to get fatter and present himself in an infinite number of different ways. This is exactly the same for the formulation of the scalar field as originally proposed by Higgs, Englert, Brout, Kibble, Guralnik and Hagen. We just note that this formulation has the advantage of being exactly what one knows from the second-order phase transitions used by Anderson in his non-relativistic proposal of this same mechanism. The existence of these states appears inescapable, whatever your best choice for the quartic potential of the scalar field is.

It is interesting to note that this is also true for Yang-Mills field theory. The classical equations of this theory display similar solutions that are massive (see here), and whatever way you develop your quantum field theory with such solutions, the mass gap is there. The theory entails the existence of massive excitations exactly as the scalar field does. This has been seen in lattice computations (see here). Can we ignore them? Of course not, but exact solutions are not our best choice, as said above, even if we will have a hard time recovering them with perturbation theory. Better to wait.

Marco Frasca (2009). Exact solutions of classical scalar field equations. J. Nonlin. Math. Phys. 18:291-297, 2011. arXiv:0907.4053v2

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit. Eur. Phys. J. C (2014) 74:2929. arXiv:1306.6530v5

Marco Frasca (2014). Exact solutions for classical Yang-Mills fields. arXiv:1409.2351v2

Biagio Lucini & Marco Panero (2012). SU(N) gauge theories at large N. Physics Reports 526 (2013) 93-163. arXiv:1210.4997v2



by mfrasca at March 14, 2015 02:31 PM