Particle Physics Planet


October 20, 2014

astrobites - astro-ph reader's digest

Gravitational waves and the need for fast galaxy surveys

Gravitational waves are ripples in space-time that occur, for example, when two very compact celestial bodies merge. Their direct detection would allow scientists to characterize these mergers and understand the physics of systems undergoing strong gravitational interactions. Perhaps that era is not so distant; gravitational wave detectors such as advanced LIGO and Virgo are expected to come online by 2016. While this is a very exciting prospect, gravitational wave detectors have limited resolution; they can constrain the location of the source to within an area of 10 to 1000 deg² on the sky, depending on the number of detectors and the strength of the signal.


An artist’s rendering of two white dwarfs coalescing and producing gravitational wave emission.

To understand the nature of the source of gravitational waves, scientists hope to be able to locate it more accurately by searching for its electromagnetic counterpart immediately after the gravitational wave is detected. How can telescopes help in this endeavor? The authors of this paper explore the possibility of performing very fast galaxy surveys to identify and characterize the birthplace of gravitational waves.

Gravitational waves from the merger of two neutron stars can be detected out to 200 Mpc, which is roughly 800 times the distance to the Andromeda galaxy. It is expected that LIGO-Virgo will detect about 40 of these events per year. There are roughly 8 galaxies per 1 deg² within 200 Mpc - that is 800 candidate galaxies in an area of 100 deg². Hence, a quick survey that would pinpoint those possible galaxy counterparts to the gravitational wave emission would be very useful. After potential hosts are identified, they could be followed up with telescopes with smaller fields of view to measure the light emitted by the source of gravitational waves.

The electromagnetic emission following the gravitational wave detection only lasts for a short period of time (for a kilonova, the timescale is approximately a week), and this drives the need for fast surveys. To devise an efficient search strategy, the authors suggest looking for galaxies with high star formation rates. It is expected that those galaxies will have higher chances of hosting a gravitational wave event. (Although they clarify that the rate of mergers of compact objects might be better correlated with the mass of the galaxy than with its star formation activity.) A good proxy for star formation in a galaxy is the light it emits in the red H-alpha line, coming from hydrogen atoms in clouds of gas that act as stellar nurseries. The issue is whether current telescopes can survey large areas fast enough to find a good fraction of all star-forming galaxies within the detection area of LIGO-Virgo.
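For reference (this calibration is my addition, not something quoted in the paper), the standard Kennicutt (1998) conversion from H-alpha luminosity to star formation rate is roughly

$$\mathrm{SFR}\,[M_\odot\,\mathrm{yr}^{-1}] \approx 7.9\times10^{-42}\, L_{\mathrm{H}\alpha}\,[\mathrm{erg\,s^{-1}}],$$

so a quick H-alpha flux measurement translates directly into an estimate of a galaxy's star formation activity.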

The authors consider a 2m-class telescope and estimate the typical observing time needed to identify a typical star-forming galaxy out to a distance of 200 Mpc. This ranges from 40 to 80 seconds depending on the observing conditions. It would take this type of telescope a week to cover 100 deg². This result matches very well the expected duration of the visible light signal from these events! Mergers of black holes and neutron stars could be detected out to larger distances (~450 Mpc). To find possible galaxy hosts out to these distances, a 2m-class telescope would cover 30 deg² in a week. Without a doubt, the exciting prospect of gravitational wave detection will spur more detailed searches for the best strategies to locate their sources.
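A back-of-the-envelope sketch of the arithmetic above (my illustration; the assumed field of view per pointing is a made-up number, not a parameter from the paper):

```python
# Back-of-the-envelope sketch of the survey arithmetic quoted above.
# The per-pointing field of view is an assumed, illustrative number;
# the paper's actual instrument parameters may differ.

GALAXY_DENSITY = 8.0   # star-forming galaxies per deg^2 within 200 Mpc (quoted above)
SEARCH_AREA = 100.0    # deg^2, a typical LIGO-Virgo localization region
EXPOSURE = 60.0        # seconds per pointing (midpoint of the quoted 40-80 s range)
FOV = 1.0              # deg^2 per pointing -- assumption, not from the article

n_candidates = GALAXY_DENSITY * SEARCH_AREA
n_pointings = SEARCH_AREA / FOV
open_shutter_days = n_pointings * EXPOSURE / 86400.0

print(f"candidate host galaxies: {n_candidates:.0f}")
print(f"pointings needed: {n_pointings:.0f}")
print(f"open-shutter time: {open_shutter_days:.2f} days "
      "(before slewing, readout and the nightly duty cycle, which stretch this to about a week)")
```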

by Elisa Chisari at October 20, 2014 08:44 AM

Jester - Resonaances

Weekend Plot: Bs mixing phase update
Today's featured plot was released last week by the LHCb collaboration:

It shows the CP violating phase in Bs meson mixing, denoted as φs, versus the difference of the decay widths between the two Bs meson eigenstates. The interest in φs comes from the fact that it's one of the precious observables that 1) is allowed by the symmetries of the Standard Model, 2) is severely suppressed due to the CKM structure of flavor violation in the Standard Model. Such observables are a great place to look for new physics (other observables in this family include Bs/Bd→μμ, K→πνν, ...). New particles, even too heavy to be produced directly at the LHC, could produce measurable contributions to φs as long as they don't respect the Standard Model flavor structure. For example, a new force carrier with a mass as large as 100-1000 TeV and order 1 flavor- and CP-violating coupling to b and s quarks would be visible given the current experimental precision. Similarly, loops of supersymmetric particles with 10 TeV masses could show up, again if the flavor structure in the superpartner sector is not aligned with that in the Standard Model.

The phase φs can be measured in certain decays of neutral Bs mesons where the process involves an interference of direct decays and decays through oscillation into the anti-Bs meson. Several years ago, measurements at Tevatron's D0 and CDF experiments suggested a large new physics contribution. The mild excess has gone away since, like many other such hints. The latest value quoted by LHCb is φs = - 0.010 ± 0.040, which combines an earlier measurement of the Bs → J/ψ π+ π- decay and the brand new measurement of the Bs → J/ψ K+ K- decay. The experimental precision is already comparable to the Standard Model prediction of φs = - 0.036. Further progress is still possible, as the Standard Model prediction can be computed to a few percent accuracy. But the room for new physics here is getting tighter and tighter.
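For context (my addition, not part of the LHCb note), the Standard Model value quoted above comes directly from the CKM matrix: to a good approximation

\[ \phi_s \simeq -2\beta_s = -2\,\arg\!\left(-\frac{V_{ts}V_{tb}^*}{V_{cs}V_{cb}^*}\right) \approx -0.036\ \mathrm{rad}, \]

which is why the measured phase can be compared directly with the CKM-fit prediction.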

by Jester (noreply@blogger.com) at October 20, 2014 08:28 AM

Lubos Motl - string vacua and pheno

ETs, hippies, loons introduce Andrew Strominger
...or a yogi and another nude man?

Exactly one week ago, Andrew Strominger of Harvard gave a Science and Cocktails talk in Christiania – a neighborhood of Copenhagen, Denmark.



The beginning of this 64-minute lecture on "Black Holes, String Theory and the Fundamental Laws of Nature" is rather extraordinary and if you only want to see the weirdest introduction of a fresh winner of the Dirac Medal, just listen to the first three minutes of the video.




However, you will obviously be much more spiritually enriched if you continue to watch for another hour – even though some people who have seen similar popular talks by Andy may feel that some of the content is redundant and similar to what they have heard.




After the introduction, you may appreciate how serious and credible Andy's and Andy's daughter's illustrations are (sorry, I can't distinguish these two artists!) in comparison with the mainstream culture in the Danish capital.

At the beginning, Andy said that it's incredible how much we already know about the Universe. We may design a space probe and land it on Mars and predict the landing within a second. We are even able to feed roast beef to Andrew Strominger and make him talk as a consequence of the food, and even predict that he would talk.

It's equally shocking when we find something clear we don't understand – something that looks like a contradiction. Such paradoxes have been essential in the history of physics. Einstein wondered what he would see in the mirror if he were running at the speed of light (or faster than that) and looking at his image in the mirror in front of him. Newton's and Maxwell's theories gave different answers. Einstein was bothered by that.

The puzzle was solved... there is a universal speed limit, special relativity, and all this stuff. About 6 other steps in physics are presented as resolutions to paradoxes of similar types. If we don't understand, it's not a problem: it's an opportunity.

Soon afterwards, Andy focuses on general relativity, spacetime curvature etc. The parabolic trajectories of freely falling objects are actually the straight(est) lines in the curved spacetime. After a few words, he gets to the uncertainty principle and also emphasizes that everything has to be subject to the principle – it's not possible to give "exceptions" to anyone. And the principle has to apply to the space's geometry, too.

There is a cookbook for how to "directly quantize" any theory, and this procedure is amazingly well tested. If you apply the cookbook to gravity, GR, you get garbage, which is great because it's a lot of fun. ;-) He says we will need "time" to figure out whether the solution we have, string theory, is right in Nature. However, already now, some more basic Harvard courses have to be fixed by some insights from the string course.

Suddenly he mentions Hawking's and Bekenstein's ideas about black holes. What do black holes have to do with these issues? They have everything to do with them, it surprisingly turns out. An introduction to black holes follows. Lots of matter, an escape velocity that surpasses the speed of light – the basic logic of this introduction is identical to my basic school talk in the mountains a few months ago. ;-) The talks would remain identical even when Andy talks about the ability of Karl Schwarzschild to exactly solve Einstein's equations that Einstein considered unsolvably difficult. Einstein had doubts about the existence of black holes for quite some time but in the 1960s, the confusion disappeared. Sgr A* is his (and my) key example of a real-world black hole.

Andy says that there's less than nothing, namely nothing nothing, inside black holes. I am not 100% sure what he actually means by that. Probably some topological issues – the Euclidean black hole has no geometry for \(r\lt r_0\) at all. OK, what happens in the quantum world? Particles tunnel out of the nothing nothing and stuff comes out as black body radiation – at the Hawking temperature. Andy calls this single equation for the temperature "Hawking's contribution to science", which slightly belittles Hawking and is surely partly Andy's goal, but OK.
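The equation he refers to is presumably the Hawking temperature of a Schwarzschild black hole of mass \(M\),

\[ T_H = \frac{\hbar c^3}{8\pi G M k_B}, \]

which, among other things, says that heavier black holes are colder.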

He switches to thermodynamics, the science done by those people who were playing with water and fire and the boiling point of carbon dioxide without knowing about molecules. Ludwig Boltzmann beautifully derived those phenomenologically found laws from the assumption that matter is composed of molecules that may be traced using probabilistic reasoning. He found the importance of entropy/information. Andy wisely presents entropy in units of kilobytes or gigabytes - because that's what ordinary people sort of know today.

Andy counts the Hawking-Bekenstein entropy formula among the five most fundamental formulae in physics, and perhaps the most interesting one because we don't understand it. That's a bit bizarre because whenever I was telling him about the general derivations of this formula I was working on, aside from other things, Andy would tell me that we didn't need such a derivation! ;-)
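For completeness, the formula counts the entropy in terms of the horizon area \(A\):

\[ S_{BH} = \frac{k_B c^3 A}{4\hbar G}. \]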

Amusingly and cleverly, he explains the holographic entropy bounds by talking about Moore's law (thanks, Luke), which must inevitably break down at some point. Of course, in the real world, it will break down long before the holographic bound is saturated... Now, he faces the tension between two pictures of black holes: something with the "nothing nothing" inside; or the most complicated (highest-entropy) objects we may have.

Around 41:00, he begins to talk about string theory, its brief history, and its picture of elementary particles. On paper, string theory is capable of unifying all the forces as well as QM with GR, and it addresses the black hole puzzle. String theory has grown by having eaten almost all the competitors (a picture of a hungry boy eating some trucks, of course). The term "string theory" is used for the big body of knowledge even today.

I think that at this point, he's explaining the Strominger-Vafa paper – and its followups – although the overly popular language makes me "slightly" uncertain about that. But soon, he switches to a much newer topic, his and his collaborators' analysis of the holographic dual of the rotating Kerr black holes.

Andy doesn't fail to mention that without seeing and absorbing the mathematics, the beauty of the story is as incomplete as someone's verbal story about his visit to the Grand Canyon whose pictures can't be seen by the recipient of the story. The equation-based description of these insights is much more beautiful for the theoretical physicists than the Grand Canyon. Hooray.

Intense applause.

Last nine minutes are dedicated to questions.

The first question is not terribly original and you could guess it. What kind of experiments can we make to decide whether string theory is correct? Andy says that the question is analogous to asking Magellan, in the middle of his trip around the Earth, when he will complete the trip. We don't know what comes next.

Now, I exploded in laughter because Andy's wording of this idea almost exactly mimics what I am often saying in such contexts. "You know, the understanding of Nature isn't a five-year plan." Of course, I like to say such a thing because 1) I was sort of fighting against the planned economy and similar excesses already as a child, 2) some people, most notably Lee Smolin, openly claimed that they think that science should be done according to five-year plans. It's great that Andy sees it identically. We surely don't have a proposal for an experiment that could say Yes or No but we work with things that are accessible and not just dreamed about, Andy says, and the work on the black hole puzzle is therefore such an important part of the research.

The second question was so great that one might even conjecture that the author knew something about the answer: Why do the entropy and the bounds scale like the area and not the volume? So Andy says that the black hole doesn't really have a volume. We "can't articulate it well" – he slightly looks like he is struggling and desperately avoiding the word "holography" for reasons I don't fully understand. OK, now he said the word.

In the third question, a girl asks how someone figured out that there should be black holes. Andy says that physicists solve things in baby steps or smaller ones. Well, they first try to solve everything exactly and they usually fail. So they try to find special solutions and Schwarzschild did find one. Amazingly, it took decades to understand what the solution meant. Every wrong thing has been tried before the right thing was arrived at.

Is a black hole needed for every galaxy? Is a black hole everywhere? He thinks that it is an empirical question. Andy says that he doesn't have an educated guess himself. Astronomers tend to believe that a black hole is in every galaxy. Of course, I would say that this question depends on the definition of a galaxy. The "galaxies" without a black hole inside are probably low-density "galaxies", and one may very well say that such diluted ensembles don't deserve the name "galaxy".

In twenty years, Andy will be able to answer the question – which he wouldn't promise for the "egg or chicken first" question.

I didn't understand the last question about some character of string theory. Andy answered that string theory will be able to explain that, whatever "that" means. ;-)

Another intense applause with colorful lights. Extraterrestrial sounds conclude the talk.

by Luboš Motl (noreply@blogger.com) at October 20, 2014 06:10 AM

October 19, 2014

Christian P. Robert - xi'an's og

Shravan Vasishth at Bayes in Paris this week

Taking advantage of his visit to Paris this month, Shravan Vasishth, from the University of Potsdam, Germany, will give a talk at 10.30am, next Friday, October 24, at ENSAE on:

Using Bayesian Linear Mixed Models in Psycholinguistics: Some open issues

With the arrival of the probabilistic programming language Stan (and JAGS), it has become relatively easy to fit fairly complex Bayesian linear mixed models. Until now, the main tool that was available in R was lme4. I will talk about how we have fit these models in recently published work (Husain et al 2014, Hofmeister and Vasishth 2014). We are trying to develop a standard approach for fitting these models so that graduate students with minimal training in statistics can fit such models using Stan.

I will discuss some open issues that arose in the course of fitting linear mixed models. In particular, one issue is: should one assume a full variance-covariance matrix for random effects even when there is not enough data to estimate all parameters? In lme4, one often gets convergence failure or degenerate variance-covariance matrices in such cases and so one has to back off to a simpler model. But in Stan it is possible to assume vague priors on each parameter, and fit a full variance-covariance matrix for random effects. The advantage of doing this is that we faithfully express in the model how the data were generated—if there is not enough data to estimate the parameters, the posterior distribution will be dominated by the prior, and if there is enough data, we should get reasonable estimates for each parameter. Currently we fit full variance-covariance matrices, but we have been criticized for doing this. The criticism is that one should not try to fit such models when there is not enough data to estimate parameters. This position is very reasonable when using lme4; but in the Bayesian setting it does not seem to matter.
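As a toy illustration of the point about faithfully expressing how the data were generated (my sketch in plain Python, not the speaker's code; the actual analyses are fitted in Stan), one can simulate a linear mixed model whose random intercepts and slopes are drawn from a full variance-covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population-level ("fixed") parameters -- assumed numbers.
beta = np.array([400.0, 30.0])   # intercept and slope (e.g. reading time vs. condition)
sigma_e = 50.0                   # residual standard deviation

# Full random-effects variance-covariance matrix: intercepts and slopes correlated.
tau = np.array([40.0, 20.0])     # random-effect standard deviations
rho = 0.6                        # intercept-slope correlation
Sigma = np.array([[tau[0]**2, rho * tau[0] * tau[1]],
                  [rho * tau[0] * tau[1], tau[1]**2]])

n_subjects, n_trials = 30, 20
u = rng.multivariate_normal(np.zeros(2), Sigma, size=n_subjects)  # subject effects

rows = []
for j in range(n_subjects):
    x = rng.choice([-0.5, 0.5], size=n_trials)                    # sum-coded condition
    mu = (beta[0] + u[j, 0]) + (beta[1] + u[j, 1]) * x
    y = rng.normal(mu, sigma_e)
    rows.append((j, x, y))

# `rows` now holds data generated exactly as the full model assumes; fitting the
# same structure (e.g. in Stan with vague priors) recovers beta, tau and rho when
# the data are informative enough, and falls back on the priors when they are not.
```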


Filed under: Books, Statistics, University life Tagged: Bayesian linear mixed models., Bayesian modelling, JAGS, linear mixed models, lme4, prior domination, psycholinguistics, STAN, Universität Potsdam

by xi'an at October 19, 2014 10:14 PM

Michael Schmitt - Collider Blog

Enhanced Higgs to tau+tau- Search with Deep Learning

“Enhanced Higgs to tau+tau- Search with Deep Learning” – that is the title of a new article posted to the archive this week by Daniel Whiteson and two collaborators from the Computer Science Department at UC Irvine (1410.3469). While the title may be totally obscure to someone outside of collider physics, it caught my immediate attention because I am working on a similar project (to be released soon).

Briefly: the physics motivation comes from the need for a stronger signal for Higgs decays to τ+τ-, which are important for testing the Higgs couplings to fermions (specifically, leptons). The scalar particle with a mass of 125 GeV looks very much like the standard model Higgs boson, but tests of couplings, which are absolutely crucial, are not very precise yet. In fact, indirect constraints are stronger than direct ones at the present time. So boosting the sensitivity of the LHC data to Higgs decays to fermions is an important task.

The meat of the article concerns the comparison of shallow artificial neural networks, which contain only one or two hidden layers, and deep artificial neural networks, which have many. Deep networks are harder to work with than shallow ones, so the question is: does one really gain anything? The answer is: yes, it's like increasing your luminosity by 25%.

This case study considers final states with two oppositely-charged leptons (e or μ) and missing transverse energy. The Higgs signal must be separated from the Drell-Yan production of τ pairs, especially Z→τ+τ-, on a statistical basis. It appears that no other backgrounds (such as W pair or top pair production) were considered, so this study is a purely technical one. Nonetheless, there is plenty to be learned from it.

Whiteson, Baldi and Sadowski make a distinction between low-level variables, which include the basic kinematic observables for the leptons and jets, and the high-level variables, which include derived kinematic quantities such as invariant masses, differences in angles and pseudorapidity, sphericity, etc. I think this distinction and the way they compare the impact of the two sets is interesting.

The question is: if a sophisticated artificial neural network is able to develop complex functions of the low-level variables through training and optimization, isn’t it redundant to provide derived kinematic quantities as additional inputs? More sharply: does the neural network need “human assistance” to do its job?

The answer is clear: human assistance does help the performance of even a deep neural network with thousands of neurons and millions of events for training. Personally I am not surprised by this, because there is physics insight behind most if not all of the high-level variables — they are not just arbitrary functions of the low-level variables. So these specific functions carry physics meaning and fall somewhere between arbitrary functions of the input variables and brand new information (or features). I admit, though, that “physics meaning” is a nebulous concept and my statement is vague…
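To make the low-level versus high-level distinction concrete, here is a small sketch (mine, not the authors' code) that builds one typical high-level variable, the dilepton invariant mass, out of low-level lepton kinematics:

```python
import math

def four_vector(pt, eta, phi, mass=0.0):
    """Convert the low-level (pt, eta, phi, m) of one object into (E, px, py, pz)."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return e, px, py, pz

def invariant_mass(p1, p2):
    """High-level variable: invariant mass of a two-object system."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Illustrative lepton kinematics (GeV and radians) -- made-up numbers.
lep1 = four_vector(pt=45.0, eta=0.4, phi=0.1)
lep2 = four_vector(pt=30.0, eta=-1.2, phi=2.5)
print(f"m_ll = {invariant_mass(lep1, lep2):.1f} GeV")
```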


Comparison of the performance of shallow networks and deep networks, and also of low-level and high-level variables

The authors applied state-of-the-art techniques for this study, including optimization with respect to hyperparameters, i.e., the parameters that concern the details of the training of the neural network (learning speed, 'velocity' and network architecture). A lot of computer cycles were burnt to carry out these comparisons!

Deep neural networks might seem like an obvious way to go when trying to isolate rare signals. There are real, non-trivial stumbling blocks, however. An important one is the vanishing gradient problem. If the number of hidden nodes is large (imagine eight layers with 500 neurons each) then training by back-propagation fails because it cannot find a significantly non-zero gradient with respect to the weights and offsets of all the neurons. If the gradient vanishes, then the neural network cannot figure out which way to evolve so that it performs well. Imagine a vast flat space with a minimum that is localized and far away. How can you figure out which way to go to get there if the region where you are is nearly perfectly flat?
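A minimal numerical illustration of the vanishing gradient (my toy example, not taken from the paper): push a gradient back through a stack of saturating sigmoid layers and its norm shrinks roughly geometrically, leaving the early layers with almost nothing to learn from.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers, width = 8, 500
weights = [rng.normal(0, 1.0 / np.sqrt(width), (width, width)) for _ in range(n_layers)]

# Forward pass through sigmoid layers, keeping activations for backpropagation.
a = rng.normal(size=width)
activations = []
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: the chain rule multiplies by W^T and the sigmoid derivative
# a*(1-a) at every layer, so the gradient norm shrinks layer by layer.
grad = rng.normal(size=width)          # gradient arriving from the loss
for W, act in zip(reversed(weights), reversed(activations)):
    grad = W.T @ (grad * act * (1.0 - act))
    print(f"gradient norm: {np.linalg.norm(grad):.3e}")
```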

The power of a neural network can be assessed on the basis of the receiver operating characteristic (ROC) curve by integrating the area beneath the curve. For particle physicists, however, the common coinage is the expected statistical significance of a hypothetical signal, so Whiteson & co. translate the performance of their networks into a discovery significance defined by a number of standard deviations. Notionally, a shallow neural network working only with low-level variables would achieve a significance of 2.57σ, while adding in the high-level variables increases the significance to 3.02σ. In contrast, the deep neural networks achieve 3.16σ with low-level variables, and 3.37σ with all variables.
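Since the expected significance of such a search grows roughly as the square root of the integrated luminosity, the step from 3.02σ (shallow, all variables) to 3.37σ (deep, all variables) corresponds to

$$\left(\frac{3.37}{3.02}\right)^{2} \approx 1.25,$$

i.e. the 25% effective luminosity gain mentioned above.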

Some conclusions are obvious: deep is better than shallow. Also, adding in the high-level variables helps in both cases. (Whiteson et al. point out that the high-level variables incorporate the τ mass, which otherwise is unavailable to the neural networks.) The deep network with low-level variables is better than a shallow network with all variables, and the authors conclude that the deep artificial neural network is learning something that is not embodied in the human-inspired high-level variables. I am not convinced of this claim since it is not clear to me that the improvement is not simply due to the inadequacy of the shallow network to the task. By way of an analogy, if we needed to approximate an exponential curve by a linear one, we would not succeed unless the range was very limited; we should not be surprised if a quadratic approximation is better.

In any case, since I am working on similar things, I find this article very interesting. It is clear that the field is moving in the direction of very advanced numerical techniques, and this is one fruitful direction to go in.


by Michael Schmitt at October 19, 2014 02:19 PM

Peter Coles - In the Dark

What’s the point of conferences?

Well, here I am back in the office making a start on my extensive to-do list. Writing it, I mean. Not actually doing any of it.

It was nice to get away for a couple of weeks, to meet up with some old friends I haven’t seen for a while and also to catch up on some of the developments in my own field and other related areas. We do have a pretty good seminar series here at Sussex which should in principle allow me to keep up to date with developments in my own research area, but unfortunately the timing of these events often clashes with other meetings that I’m obliged to attend as Head of School. Escaping to a conference is a way of focussing on research for a while without interruption. At least that’s the idea.

While at the meeting, however, I was struck by a couple of things. First was that during the morning plenary lectures given by invited speakers almost everyone in the audience was spending much more time working on their laptops than listening to the talk.  This has been pretty standard at every meeting I’ve been to for the last several years. Now that everyone uses powerpoint (or equivalent) for such presentations nobody in the audience feels the need to take notes so to occupy themselves they spend the time answering emails or pottering about on Facebook. That behaviour does not depend on the quality of the talk, either. Since nobody seems to listen very much the question naturally arises as to whether the presentations have any intrinsic value at all. It often seems to me that the conference talk has turned into a kind of ritual that persists despite nobody really knowing what it’s for or how it originated. An hour is too long to talk if you really want people to listen, but we go on doing it.

The part of a conference session that’s more interesting is the discussion after each talk. Sometimes there’s a genuine discussion from which you learn something quite significant or get an idea for a new study.  There’s often also a considerable amount of posturing, preening and point-scoring which is less agreeable but in its own way I suppose fairly interesting.

At the meeting I was attending, the afternoons were devoted to discussion sessions for which we split into groups. I was allocated to “Gravitation and Cosmology”; others were on “Cosmic Rays”, “Neutrino Physics and Astrophysics”, and so on. The group I was in, of about 25 people, was a nice size for discussion. These sessions were generally planned around short “informal” presentations intended to stimulate discussion, but generally these presentations were about the same length as the plenary talks and also given in Powerpoint. There was discussion, but the format turned out to be less different from the morning sessions than I’d hoped for. I’m even more convinced than ever that Powerpoint presentations used in this way stifle rather than stimulate discussion and debate. The pre-prepared presentation is often used as a crutch by a speaker reluctant to adopt a more improvisatory approach that would probably be less polished but arguably more likely to generate new thoughts.

I don’t know whether the rise of Powerpoint is itself to blame for our collective unwillingness or inability to find other ways of talking about science, but I’d love to try organizing a workshop or conference along lines radically different from the usual “I talk, you listen” format, in which the presenter is active and the audience passive for far too long.

All this convinced me that the answer to the question “What is the point of conferences?” has very little to do with the formal  programme and more with the informal parts, especially the conversations over coffee and at dinner. Perhaps I should try arranging a conference that has nothing but dinner and coffee breaks on the schedule?


by telescoper at October 19, 2014 02:02 PM

October 18, 2014

Christian P. Robert - xi'an's og

a week in Warwick

Canadian geese, Warwick

This past week in Warwick has been quite enjoyable and profitable, from staying once again in a math house, to taking advantage of the new bike, to having several long discussions on several prospective and exciting projects, to meeting with some of the new postdocs and visitors, to attending Tony O’Hagan’s talk on “wrong models”. And then having Simo Särkkä, who was visiting Warwick this week, discussing his paper with me. And Chris Oates doing the same with his recent arXival with Mark Girolami and Nicolas Chopin (soon to be commented, of course!). And managing to run in dry conditions despite the heavy rains (but in pitch dark as sunrise is now quite late, with the help of a headlamp and the beauty of a countryside starry sky). I also evaluated several students’ projects, two of which led me to wonder when using RJMCMC was appropriate in comparing two models. In addition, I also eloped one evening to visit old (1977!) friends in Northern Birmingham, despite fairly dire London Midlands performances between Coventry and Birmingham New Street, the only redeeming feature being that the connecting train there was also late by one hour! (Not mentioning the weirdest taxi-driver ever on my way back, trying to get my opinion on whether or not he should have an affair… which at least kept me awake the whole trip!) Definitely looking forward to my next trip there at the end of November.


Filed under: Books, Kids, Running, Statistics, University life Tagged: Birmingham, control variate, Coventry, English train, goose, London Midlands, Mark Girolami, Nicolas Chopin, particle MCMC, simulation model, taxi-driver, Tony O'Hagan, University of Warwick

by xi'an at October 18, 2014 10:14 PM

Sean Carroll - Preposterous Universe

How to Communicate on the Internet

Let’s say you want to communicate an idea X.

You would do well to simply say “X.”

Also acceptable is “X. Really, just X.”

A slightly riskier strategy, in cases where miscomprehension is especially likely, would be something like “X. This sounds a bit like A, and B, and C, but I’m not saying those. Honestly, just X.” Many people will inevitably start arguing against A, B, and C.

Under no circumstances should you say “You might think Y, but actually X.”

Equally bad, perhaps worse: “Y. Which reminds me of X, which is what I really want to say.”

For examples see the comment sections of the last couple of posts, or indeed any comment section anywhere on the internet.

It is possible these ideas may be of wider applicability in communication situations other than the internet.

(You may think this is just grumping but actually it is science!)

by Sean Carroll at October 18, 2014 04:32 PM

Peter Coles - In the Dark

Bagaglio Mancante

I should have known something would go wrong.

When my flight landed at Gatwick yesterday, I was quickly off the plane, through passport control and into the Baggage Reclaim. And there I waited. My baggage never arrived.

After almost an hour of waiting in vain, I went to the counter and filed a missing baggage report before getting the train back to Brighton.

By then my phone battery was flat but the charger was in my lost bag so I was unable to receive the text message I was told I would get when my bag was located. This morning I had to buy another charger and when I recharged my phone I discovered the bag had arrived at London Gatwick at 0800 this morning and a Courier would call to arrange delivery.

Great, I thought. Gatwick is only 30 minutes away from Brighton so I would soon get my stuff.

Wrong. Using the online tracking system I found the bag had been sent to Heathrow and had sat there until after 2pm before being loaded onto a vehicle for delivery.

There’s nobody answering phones at the courier company so I guess I just have to wait in the flat until they decide to deliver it.

I don’t know how BA managed to lose a bag on a direct flight in the first place, but their idiotic courier has added at least half a day’s delay in returning it.

UPDATE: My bag finally arrived at 1940. It seems it was never put on the plane I flew on.



by telescoper at October 18, 2014 02:12 PM

October 17, 2014

The Great Beyond - Nature blog

White House suspends enhanced pathogen research

Past research has made the H5N1 virus transmissible in ferrets.

Sara Reardon

As the US public frets about the recent transmission of Ebola to two Texas healthcare workers, the US government has turned an eye on dangerous viruses that could become far more widespread if they escaped from the lab. On 17 October, the White House’s Office of Science and Technology Policy (OSTP) announced a mandatory moratorium on research aimed at making pathogens more deadly, known as gain-of-function research.

Under the moratorium, government agencies will not fund research that attempts to make natural pathogens more transmissible through the air or more deadly in the body. Researchers who have already been funded to do such projects are asked to voluntarily pause work while two non-regulatory bodies, the National Science Advisory Board for Biosecurity (NSABB) and the National Research Council, assess its risks. The ban specifically mentions research that would enhance influenza, SARS and MERS. Other types of research on naturally occurring strains of these viruses would still be funded.

This is the second time that gain-of-function research has been suspended. In 2012, 39 scientists working on influenza agreed to a voluntary moratorium after the publication of two papers demonstrating that an enhanced H5N1 influenza virus could be transmitted between mammals through respiratory droplets. The publications drew a storm of controversy centered around the danger that they might give terrorists the ability to create highly effective bioweapons, or that the viruses might accidentally escape the lab. Research resumed after regulatory agencies and entities such as the World Health Organization laid out guidelines for ensuring the safety and security of flu research.

The OSTP’s moratorium, by contrast, is mandatory and affects a far broader array of viruses. “I think it’s really excellent news,” says Marc Lipsitch of Harvard University in Cambridge, Massachusetts, who has long called for more oversight of risky research. “I think it’s common sense to deliberate before you act.”

Virologist Yoshihiro Kawaoka of the University of Wisconsin Madison, who conducted one of the controversial H5N1 gain-of-function studies in an effort to determine how the flu virus could evolve to become more deadly in mammals, says he plans to “comply with the government’s directives” on those experiments that are considered gain-of-function under OSTP’s order. “I hope that the issues can be discussed openly and constructively so that important research will not be delayed indefinitely,” he says.

The NSABB, which has not met since 2012, was called back into action in July, apparently in response to a set of lab accidents at the US Centers for Disease Control and Prevention in which lab workers were exposed to anthrax and inadvertently shipped H5N1 virus without proper safety precautions. The NSABB will spend most of its next meeting on 22 October discussing gain-of-function research, and the NRC plans to hold a workshop on a date that has not yet been set. Lipsitch, who will speak at the NSABB meeting, says he plans to advocate the use of an objective risk-assessment tool to weigh the potential benefits of each research project against the probability of a lab accident and the pathogen’s contagiousness; and consider whether the knowledge gained by studying a risky pathogen could be gained in a safer way.

by Sara Reardon at October 17, 2014 10:52 PM

Emily Lakdawalla - The Planetary Society Blog

Watching Siding Spring's encounter with Mars
The nucleus of comet Siding Spring passes close by Mars on Sunday, October 19, at 18:27 UTC. Here are links to webcasts and websites that should have updates throughout the encounter.

October 17, 2014 09:11 PM

astrobites - astro-ph reader's digest

UR #16: Star Cluster Evolution

The undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

Did you finish a senior thesis this summer? Or maybe you’re just getting started on an astro research project this semester? If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!

You can share what you’re doing by clicking on the “Your Research” tab above (or by clicking here) and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.

We look forward to hearing from you!

************

Bhawna Motwani
Indian Institute of Technology Roorkee, Uttarakhand, India

Bhawna is a final year Integrated Masters student of Physics at IIT Roorkee. This work is a part of her summer research in 2013 with Prof. Pavel Kroupa and Dr. Sambaran Banerjee at the Argelander Institut für Astronomie, Bonn, Germany.

Dynamical Evolution of Young Star Clusters

The much-debated classical scenario of star-cluster formation, first delineated by Hills (1980), suggests that the collapse of a proto-stellar core within a parent molecular gas cloud gives rise to an infant gas-embedded cluster. Subsequently, the residual gas is driven out of the cluster by kinetic energy from stellar winds and radiation, thereby diluting the gravitational cluster potential. However, owing to a star-formation efficiency $\epsilon < 50\%$ (Kroupa 2008) and slow gas-expulsion, the cluster remains fractionally bound and ultimately regains dynamical equilibrium. With the help of the NBODY6 (Aarseth 1999) code, we perform N-body simulations to examine the time-evolution of confinement radii ($R_f$) for various mass-fractions $f$ of such emerging clusters. From this, we infer the cluster re-virialization times ($\tau_{vir}$) and bound-mass fractions for a range of initial cluster masses and half-mass radii. We relate the above properties to stellar evolution and initial mass segregation in the simulation and find that primordially segregated systems virialize faster and possess a lower bound-mass fraction on account of mass loss from heavy stars and 2-body+3-body interactions, whereas stellar evolution does not exhibit a significant effect. This research is the first instance where a realistic IMF (Kroupa 2001) has been utilized to perform an extended parameter scan for an N-body cluster model.
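As an illustration of how Lagrange radii like $R_{f}$ are extracted from an N-body snapshot (my sketch, not the NBODY6 analysis pipeline), one can sort the particles by distance from the cluster centre and read off the radii enclosing given mass fractions:

```python
import numpy as np

def lagrange_radii(masses, positions, fractions=(0.1, 0.5, 0.8)):
    """Radii enclosing the given fractions of the total mass, measured from
    the cluster centre (approximated here by the centre of mass)."""
    centre = np.average(positions, axis=0, weights=masses)
    r = np.linalg.norm(positions - centre, axis=1)
    order = np.argsort(r)
    cum_mass = np.cumsum(masses[order]) / masses.sum()
    return {f: r[order][np.searchsorted(cum_mass, f)] for f in fractions}

# Toy snapshot with made-up numbers (not the paper's initial conditions).
rng = np.random.default_rng(2)
n = 10000
masses = np.full(n, 3.0)                         # solar masses
positions = rng.normal(scale=0.5, size=(n, 3))   # parsecs, ~0.5 pc scale
print(lagrange_radii(masses, positions))
```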

The figure depicts a typical Lagrange radii $R_{f}$ evolution profile for a computed N-body model with initial mass $= 3\times10^{4}\,M_\odot$ and half-mass radius $= 0.5$ pc. From bottom to top, the curves represent mass fractions from 5% to 99% in steps of 5%. The dark-red lines represent $R_{10}$, $R_{50}$ and $R_{80}$, respectively. The delay time (the time after which gas expulsion starts) is 0.6 Myr.


 

by Astrobites at October 17, 2014 09:07 PM

Emily Lakdawalla - The Planetary Society Blog

Curiosity update, sols 764-781: Work complete at Confidence Hills; puzzling arm issues
Curiosity spent a total of four weeks at Confidence Hills, feeding samples to SAM and CheMin several times. On two weekends during this period, the rover's activities were interrupted by faults with the robotic arm. Curiosity drove away from Confidence Hills on sol 780, and is ready to observe comet Siding Spring over the weekend.

October 17, 2014 05:53 PM

Sean Carroll - Preposterous Universe

Does Santa Exist?

There’s a claim out there — one that is about 95% true, as it turns out — that if you pick a Wikipedia article at random, then click on the first (non-trivial) link, and keep clicking on the first link of each subsequent article, you will end up at Philosophy. More specifically, you will end up at a loop that runs through Reality, Existence, Awareness, Consciousness, and Quality (philosophy), as well as Philosophy itself. It’s not hard to see why. These are the Big Issues, concerning the fundamental nature of the universe at a deep level. Almost any inquiry, when pressed to ever-greater levels of precision and abstraction, will get you there.

Does Santa Exist? Take, for example, the straightforward-sounding question “Does Santa Exist?” You might be tempted to say “No” and move on. (Or you might be tempted to say “Yes” and move on, I don’t know — a wide spectrum of folks seem to frequent this blog.) But even to give such a common-sensical answer is to presume some kind of theory of existence (ontology), not to mention a theory of knowledge (epistemology). So we’re allowed to ask “How do you know?” and “What do you really mean by exist?”

These are the questions that underlie an entertaining and thought-provoking new book by Eric Kaplan, called Does Santa Exist?: A Philosophical Investigation. Eric has a resume to be proud of: he is a writer on The Big Bang Theory, and has previously written for Futurama and other shows, but he is also a philosopher, currently finishing his Ph.D. from Berkeley. In the new book, he uses the Santa question as a launching point for a rewarding tour through some knotty philosophical issues. He considers not only a traditional attack on the question, using Logic and the beloved principles of reason, but sideways approaches based on Mysticism as well. (“The Buddha ought to be able to answer our questions about the universe for like ten minutes, and then tell us how to be free of suffering.”) His favorite, though, is the approach based on Comedy, which is able to embrace contradiction in a way that other approaches can’t quite bring themselves to do.

Most people tend to have a pre-existing take on the Santa question. Hence, the book trailer for Does Santa Exist? employs a uniquely appropriate method: Choose-Your-Own-Adventure. Watch and interact, and you will find the answers you seek.

by Sean Carroll at October 17, 2014 04:34 PM

CERN Bulletin

CHIS – Letter from French health insurance authorities "Assurance Maladie" and “frontalier” status

Certain members of the personnel residing in France have recently received a letter, addressed to themselves and/or their spouse, from the French health insurance authorities (Assurance Maladie) on the subject of changes in the health insurance coverage of “frontalier” workers.

 

It should be recalled that employed members of personnel (MPE) are not affected by the changes made by the French authorities to the frontalier  workers' "right to choose" (droit d'option) in matters of health insurance (see the CHIS website for more details), which took effect as of 1 June 2014, as they are not considered to be frontalier workers. Associated members of the personnel (MPA) are not affected either, unless they live in France and are employed by a Swiss institute.

For the small number of MPAs in the latter category who might be affected, as well as for family members who do have frontalier status, CERN is still in discussion with the authorities of the two Host States regarding the health insurance coverage applicable to them.

We hope to receive more information in the coming weeks and will keep you informed via the CHIS web site and the CERN Bulletin.

HR Department

October 17, 2014 04:10 PM

Christian P. Robert - xi'an's og

frankly, I did not read your papers in detail, but…

A very refreshing email from a PhD candidate from abroad:

“Franchement j’ai pas lu encore vos papiers en détails, mais j’apprécie vos axes de recherche et j’aimerai bien en faire autant  avec votre collaboration, bien sûr. Actuellement, je suis à la recherche d’un sujet de thèse et c’est pour cela que je vous écris. Je suis prêt à négocier sur tout point et de tout coté.”

[Frankly I have not yet read your papers in detail , but I appreciate your research areas and I would love to do the same with your help , of course.  Currently, I am looking for a thesis topic and this is why I write to you. I am willing to negotiate on any point and any side.]


Filed under: Kids, Statistics, University life Tagged: foreign students, PhD s, PhD topic

by xi'an at October 17, 2014 03:10 PM

The n-Category Cafe

New Evidence of the NSA Deliberately Weakening Encryption

One of the most high-profile ways in which mathematicians are implicated in mass surveillance is in the intelligence agencies’ deliberate weakening of commercially available encryption systems — the same systems that we rely on to protect ourselves from fraud, and, if we wish, to ensure our basic human privacy.

We already knew quite a lot about what they’ve been doing. The NSA’s 2013 budget request asked for funding to “insert vulnerabilities into commercial encryption systems”. Many people now know the story of the Dual Elliptic Curve pseudorandom number generator, used for online encryption, which the NSA aggressively and successfully pushed to become the industry standard, and which has weaknesses that are widely agreed by experts to be a back door. Reuters reported last year that the NSA arranged a secret $10 million contract with the influential American security company RSA (yes, that RSA), who became the most important distributor of that compromised algorithm.

In the August Notices of the AMS, longtime NSA employee Richard George tried to suggest that this was baseless innuendo. But new evidence published in The Intercept makes that even harder to believe than it already was. For instance, we now know about the top secret programme Sentry Raven, which

works with specific US commercial entities … to modify US manufactured encryption systems to make them exploitable for SIGINT [signals intelligence].

(page 9 of this 2004 NSA document).

The Intercept article begins with a dramatic NSA-drawn diagram of the hierarchy of secrecy levels. Each level is colour-coded. Top secret is red, and above top secret (these guys really give it 110%) are the “core secrets” — which, as you’d probably guess, are in black. From the article:

the NSA’s “core secrets” include the fact that the agency works with US and foreign companies to weaken their encryption systems.

(The source documents themselves are linked at the bottom of the article.)

It’s noted that there is “a long history of overt NSA involvement with American companies, especially telecommunications and technology firms”. Few of us, I imagine, would regard that as a bad thing in itself. It’s the nature of the involvement that’s worrying. The aim is not just to crack the encrypted messages of particular criminal suspects, but the wholesale compromise of all widely used encryption methods:

The description of Sentry Raven, which focuses on encryption, provides additional confirmation that American companies have helped the NSA by secretly weakening encryption products to make them vulnerable to the agency.

The documents also appear to suggest that NSA staff are planted inside American security, technology or telecomms companies without the employer’s knowledge. Chris Soghoian, principal technologist at the ACLU, notes that “As more and more communications become encrypted, the attraction for intelligence agencies of stealing an encryption key becomes irresistible … It’s such a juicy target.”

Unsurprisingly, the newly-revealed documents don’t say anything specific about the role played by mathematicians in weakening digital encryption. But they do make it that bit harder for defenders of the intelligence agencies to maintain that their cryptographic efforts are solely directed against the “bad guys” (a facile distinction, but one that gets made).

In other words, there is now extremely strong documentary evidence that the NSA and its partners make strenuous efforts to compromise, undermine, degrade and weaken all commonly-used encryption software. As the Reuters article puts it:

The RSA deal shows one way the NSA carried out what Snowden’s documents describe as a key strategy for enhancing surveillance: the systematic erosion of security tools.

The more or less explicit aim is that no human being is able to send a message to any other human being that the NSA cannot read.

Let that sink in for a while. There is less hyperbole than there might seem when people say that the NSA’s goal is the wholesale elimination of privacy.

This evening, I’m going to see Laura Poitras’s film Citizenfour (trailer), a documentary about Edward Snowden by one of the two journalists to whom he gave the full set of documents. But before that, I’m going to a mathematical colloquium by Trevor Wooley, Strategic Director of the Heilbronn Institute — which is the University of Bristol’s joint venture with GCHQ. I wonder how mathematicians like him, or young mathematicians now considering working for the NSA or GCHQ, feel about the prospect of a world where it is impossible for human beings to communicate in private.

by leinster (tom.leinster@ed.ac.uk) at October 17, 2014 03:05 PM

arXiv blog

Urban "Fingerprints" Finally Reveal the Similarities (and Differences) Between American and European Cities

Travelers have long noticed that some American cities “feel” more European than others. Now physicists have discovered a way to measure the “fingerprint” of a city that captures this sense.

October 17, 2014 03:05 PM

Lubos Motl - string vacua and pheno

Lorentz violation: zero or 10 million times smaller than previously thought
One of the research paradigms that I consider insanely overrated is the idea that the fundamental theory of Nature may break the Lorentz symmetry – the symmetry underlying the special theory of relativity – and that the theorist may pretty much ignore the requirement that the symmetry should be preserved.

The Super-Kamiokande collaboration has published a new test of the Lorentz violation that used over a decade of observations of atmospheric neutrinos:
Test of Lorentz Invariance with Atmospheric Neutrinos
The Lorentz-violating terms whose existence they were trying to discover are some bilinear terms modifying the oscillations of the three neutrino species, \(\nu_e,\nu_\mu,\nu_\tau\), by treating the temporal and spatial directions of the spacetime differently.
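Schematically (my paraphrase of the usual Standard-Model-Extension-inspired parameterization, not a formula copied from the paper), the effective Hamiltonian driving the flavor oscillations is modified as

\[ H \simeq \frac{1}{2E}\,U\,{\rm diag}(0,\Delta m^2_{21},\Delta m^2_{31})\,U^\dagger + a + E\,c, \]

where \(a\) (CPT-odd) and \(c\) (CPT-even) are constant \(3\times 3\) matrices of Lorentz-violating coefficients; because their effect does not fall off as \(1/E\), high-energy atmospheric neutrinos crossing the Earth are an extremely sensitive probe.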




They haven't found any evidence that these coefficients are nonzero which allowed them to impose new upper bounds. Some of them, in some parameterization, are 10 million times more constraining than the previous best upper bounds!




I don't want to annoy you with some technical details of this good piece of work because I am not terribly interested in it myself, being more sure about the result than about any other experiment by a wealthy enough particle-physics-like collaboration. But I can't resist reiterating a general point.

The people who are playing with would-be fundamental theories that don't preserve the Lorentz invariance exactly (like most of the "alternatives" of string theory meant to describe quantum gravity) must hope that "the bad things almost exactly cancel" so that the resulting effective theory is almost exactly Lorentz-preserving which is needed for the agreement with the Super-Kamiokande search – as well as a century of less accurate experiments in different sectors of physics.

But in the absence of an argument why the resulting effective theory should be almost exactly Lorentz-preserving, one must assume that it's not and that the Lorentz-violating coefficients are pretty much uniformly distributed in a certain interval.

Before this new paper, they were allowed to be between \(0\) and a small number \(\epsilon\) and if one assumes that they were nonzero, there was no theoretical reason to think that the value was much smaller than \(\epsilon\). But a new observation shows that the new value of \(\epsilon\) is 10 million times smaller than the previous one. The Lorentz-breaking theories just don't have any explanation for this strikingly accurate observation, so they should be disfavored.
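To spell out the arithmetic: with a flat prior for a coefficient on \([0,\epsilon_{\rm old}]\), the probability that it happens to lie inside the newly allowed window is

\[ \frac{\epsilon_{\rm new}}{\epsilon_{\rm old}} \approx 10^{-7}, \]

which is the sense in which the plausibility of a generic Lorentz-violating theory has just dropped by a factor of ten million.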

The simplest estimate of what happens with the "Lorentz symmetry is slightly broken" theories is, of course, that their probability has decreased 10 million times when this paper was published! Needless to say, it's not the first time that the plausibility of such theories has dramatically decreased. But even if this were the first such observation, it should mean that one lines up 10,000,001 of the likes of Lee Smolin who are promoting similar theories and kills 10,000,000 of them.

(OK, their names don't have to be "Lee Smolin". Using millions of his fans would be pretty much OK with me. The point is that the research into these possibilities should substantially decrease.)

Because nothing remotely similar to this sensible procedure is taking place, it seems to me that too many people just don't care about the empirical data at all. They don't care about the mathematical cohesiveness of the theories, either. Both the data and the mathematics seem to unambiguously imply that the Lorentz symmetry of the fundamental laws of Nature is exact and a theory that isn't shown to exactly preserve this symmetry – or to be a super-tiny deformation of an exactly Lorentz-preserving theory – is just ruled out.

Most of the time, they hide their complete denial of this kind of experiment behind would-be fancy words. General relativity always breaks the Lorentz symmetry because the spacetime is curved, and so on. But this breaking is spontaneous and there are still several extremely important ways in which the Lorentz symmetry underlying the original laws of physics constrains all phenomena in the spacetime, whether it is curved or not. The Lorentz symmetry still has to hold "locally", in small regions that always resemble regions of a flat Minkowski space, and it must also hold in "large regions" that resemble the flat space if the objects inside (which may even be black holes, highly curved objects) may be represented as local disturbances inside a flat spacetime.

One may misunderstand the previous sentences – or pretend that he misunderstands the previous sentences – but it is still a fact that a fundamentally Lorentz-violating theory makes a prediction (at least a rough, qualitative prediction) about experiments such as the experiment in this paper and this prediction clearly disagrees with the observations.

By the way, a few days ago, Super-Kamiokande published another paper with limits, this time on the proton lifetime (in PRD). Here the improvement is small, if any, and theories naturally giving these long lifetimes obviously exist and still seem "most natural". But yes, I also think that theories with a totally stable proton may exist and should be considered.

by Luboš Motl (noreply@blogger.com) at October 17, 2014 02:50 PM

CERN Bulletin

Computer Security: Our life in symbiosis*

Do you recall our Bulletin articles on control system cyber-security (“Hacking control systems, switching lights off!” and “Hacking control systems, switching... accelerators off?”) from early 2013? Let me shed some light on this issue from a completely different perspective.

 

I was raised in Europe during the 80s. With all the conveniences of a modern city, my environment made me a cyborg - a human entangled with technology - supported by, but also dependent on, software and hardware. Since my childhood, I have eaten food packaged by machines and shipped through a sophisticated network of ships and lorries, keeping it fresh or frozen until it arrives in supermarkets. I heat my house with the magic of nuclear energy provided to me via a complicated electrical network. In fact, many of the amenities and gadgets I use are based on electricity and I just need to tap a power socket. When on vacation, I travel by taxi, train and airplane. And I enjoy the beautiful weather outside thanks to the air conditioning system located in the basement of the CERN IT building.

This air conditioning system, a process control system (PCS), monitors the ambient room temperature through a distributed network of sensors. A smart central unit - the Programmable Logic Controller (PLC) - compares the measured temperature values with a set of thresholds and subsequently calculates a new setting for heating or cooling. On top of this temperature control loop (monitor - calculate - set), a small display (a simple SCADA (supervisory controls and data acquisition) system) attached to the wall allows me to read the current room temperature and to manipulate its set-points. Depending on the size of the building and the number of processes controlled, many (different) sensors, PLCs, actuators and SCADA systems can be combined and inter-connected to build a larger and more complex PCS.
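A minimal sketch of that monitor-calculate-set loop (illustrative Python pseudologic with made-up thresholds, not the actual PLC program):

```python
import random
import time

SETPOINT = 21.0   # desired room temperature in deg C (illustrative value)
DEADBAND = 0.5    # hysteresis around the setpoint to avoid rapid switching

def read_sensors():
    """Stand-in for the distributed temperature sensors."""
    return [random.gauss(21.0, 1.0) for _ in range(4)]

def control_step(cooling_on):
    temperature = sum(read_sensors()) / 4        # monitor
    if temperature > SETPOINT + DEADBAND:        # calculate
        cooling_on = True
    elif temperature < SETPOINT - DEADBAND:
        cooling_on = False
    # set: in a real PLC this would drive the chiller or heater actuator
    print(f"T = {temperature:.2f} C, cooling {'ON' if cooling_on else 'OFF'}")
    return cooling_on

cooling = False
for _ in range(5):                               # a few iterations of the loop
    cooling = control_step(cooling)
    time.sleep(0.1)
```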

In a similar way, all our commodities and amenities depend on many different, complex PCSs, e.g. PCSs for water and waste management, for electricity production and transmission, for public and private transport, for communication, for the production of oil and gas, but also of cars, food and pharmaceuticals. Today, many people live in symbiosis with those PCSs, which make their lives cosy and comfortable, and industry depends on them. This variety of PCSs has become a piece of “critical infrastructure”, providing the fundamental basis of our general survival.

So what would happen if part or all of this critical infrastructure failed? How would your life change without clean tap water and proper waste disposal, without electricity, without fresh and frozen food? The cool air in the lecture hall would get warm and uncomfortable. On a wider scale, with no drinking water from the tap, we would have to go back to local wells or collect and heat rain water in order to purify it. Failure of the electricity system would halt public life: frozen goods in supermarkets would warm up and become inedible, fuel pumps would not work anymore, life-preservation systems in hospitals would stop once the local diesel generators ran out of fuel… (this is nicely depicted in the novel “Blackout” by M. Elsberg).

We rely on our critical infrastructure, we rely on PCSs and we rely on the technologies behind PCSs. In the past, PCSs, PLCs and SCADA systems and their hardware and software components were proprietary, custom-built, and stand-alone. Expertise was centralised with a few system engineers who knew their system by heart. That has changed in recent decades. Pressure for consolidation and cost-effectiveness has pushed manufacturers to open up. Today, modern PCSs employ the same technological means that have been used for years in computer centres, in offices and at home: Microsoft’s Windows operating system to run SCADA systems; web browsers as user interfaces; laptops and tablets replacing paper checklists; emails to disseminate status information and alerts; the IP protocol to communicate among different parts of a PCS; the Internet for remote access for support personnel and experts...

Unfortunately, while benefitting from standard information technology, PCSs have also inherited its drawbacks: design flaws in hardware, bugs in software components and applications, and vulnerabilities in communication protocols. Exploiting these drawbacks, malicious cyber-attackers and benign IT researchers have probed many different hardware, software and protocols for many years. Today, computer centres, office systems and home computers are permanently under attack. With their new technological basis, PCSs underwent scrutiny, too. The sophisticated “Stuxnet” attack by the US and Israel against the control system of Iranian uranium enrichment facilities in 2010 is just one of the more publicised cases. New vulnerabilities affecting PCSs are regularly published on certain web pages, and recipes for malicious attacks circulate widely on the Internet. The damage caused may be enormous.

Therefore, “Critical Infrastructure Protection” (CIP) becomes a must. But protecting PCSs as we protect computer centres - patching them, running anti-virus on them, and controlling access to them - is much more difficult than attacking them. PCSs are built for specific use-cases, and malicious abuse is rarely considered during their design and implementation phases. For example, rebooting a SCADA PC temporarily interrupts its monitoring capabilities, while updating PLC firmware usually requires thorough re-testing and probably even re-certification. Both are non-trivial and costly tasks that cannot be done in line with the monthly patch cycles released by firms like Microsoft.

Ergo, a fraction (if not the majority) of today’s PCSs are vulnerable to common cyber-attacks. Not without reason, the former advisor to the US president, Richard Clarke, said “that the US might be able to blow up a nuclear plant somewhere, or a terrorist training centre somewhere, but a number of countries could strike back with a cyber-attack and the entire [US] economic system could be crashed in retaliation … because we can’t defend it today.” (AP 2011) We need to raise our cyber-defences now. Without CIP, without protected SCADA systems, our modern symbiotic life is at risk.

*To be published in the annual yearbook of the World Federation of Scientists.


Check out our website for further information, answers to your questions and help, or e-mail Computer.Security@cern.ch.

If you want to learn more about computer security incidents and issues at CERN, just follow our Monthly Report.


Access the entire collection of Computer Security articles here.

October 17, 2014 02:10 PM

ZapperZ - Physics and Physicists

Iranian Physicist Omid Kokabee To Receive A New Trial
This type of prosecution used to happen under the iron-fisted rule of the Soviet Union. But there is a sign of optimism in the case of physicist Omid Kokabee, as the Iranian Supreme Court has ordered a new trial. This comes after Kokabee has spent four years in prison on a charge that many around the world considered flimsy at best.

"Acceptance of the retrial request means that the top judicial authority has deemed Dr. Omid Kokabee's [initial] verdict against the law," Kokabee's lawyer, Saeed Khalili was quoted as saying on the website of the International Campaign for Human Rights in Iran. "The path has been paved for a retrial in his case, and God willing, proving his innocence."

Kokabee, a citizen of Iran who at the time was studying at the University of Texas, Austin, was first arrested at the Tehran airport in January 2011. After spending 15 months in prison waiting for a trial, including more than a month in solitary confinement, he was convicted by Iran's Revolutionary Court of "communicating with a hostile government" and receiving "illegitimate funds" in the form of his college loans. He was sentenced to ten years in prison without ever talking to his lawyer or being allowed testimony in his defense.

He received stipends as part of his graduate assistantship, and these were considered to be "illegitimate funds", which is utterly ridiculous. My characterization of such an accusation is that it can only come out of an extremely stupid and moronic group of people. There, I've said it!

Zz.

by ZapperZ (noreply@blogger.com) at October 17, 2014 01:41 PM

Symmetrybreaking - Fermilab/SLAC

High schoolers try high-powered physics

The winners of CERN's Beam Line for Schools competition conducted research at Europe’s largest physics laboratory.

Many teenagers dream about getting the keys to their first car. Last month, a group of high schoolers got access to their first beam of accelerated particles at CERN.

As part of its 60th anniversary celebration, CERN invited high school students from around the world to submit proposals for how they would use a beam of particles at the laboratory. Of the 292 teams that submitted the required “tweet of intent,” 1000-word proposal and one-minute video, CERN chose not one but two groups of winners: one from Dominicus College in Nijmegen, the Netherlands, and another from the Varvakios Pilot School in Athens, Greece.

The teams travelled to Switzerland in early September.

“Just being at CERN was fantastic,” says Nijmegen student Lisa Biesot. “The people at CERN were very enthusiastic that we were there. They helped us very much, and we all worked together.”

The Beam Line for Schools project was the brainchild of CERN physicist Christoph Rembser, who also coordinated the project. He and others at CERN didn’t originally plan for more than one team to win. But it made sense, as the two groups easily merged their experiments: Dominicus College students constructed a calorimeter that was placed within the Varvakios Pilot School’s experiment, which studied one of the four fundamental forces, the weak force.

“These two strong experiments fit so well together, and having an international collaboration, just like what we have at CERN, was great,” says Kristin Kaltenhauser of CERN’s international relations office, who worked with the students.

Over the summer the Nijmegen team grew crystals from potassium dihydrogen phosphate, a technique not used before at CERN, to make their own calorimeter, a piece of equipment that measures the energy of different particles.

At CERN, the unified team cross-calibrated the Nijmegen calorimeter with a calorimeter at CERN.

“We were worried if it would work,” says Nijmegen teacher Rachel Crane. “But then we tested our calorimeter on the beam with a lot of particles—positrons, electrons, pions and muons—and we really saw the difference. That was really amazing.”

The Athens team modeled their proposal on one of CERN’s iconic early experiments, conducted at the laboratory's first accelerator in 1958 to study an aspect of the weak force, which powers the thermonuclear reactions that cause the sun to shine.

Whereas the 1958 experiment had used a beam made completely of particles called pions, the students’ experiment used a higher energy beam containing a mixture of pions, kaons, protons, electrons and muons. They are currently analyzing the data.

CERN physicists Saime Gurbuz and Cenk Yildiz, who assisted the two teams, say they and other CERN scientists were very impressed with the students. “They were like real physicists,” Gurbuz says. “They were  professional and eager to take data and analyze it.”

The students and their teachers agree that working together enriched both their science and their overall experience. “We were one team,” says Athens student Nikolas Plaskovitis. “The collaboration was great and added so much to the experiment.” 

The students, teachers and CERN scientists have stayed in touch since the trip.

Before Nijmegen student Olaf Leender started working on the proposal, he was already interested in science, he says. “Now after my visit to CERN and this awesome experience, I am definitely going to study physics.”

Andreas Valadakis, who teaches the Athens group, says that his students now serve as science mentors to their fellow students. “This experience was beyond what we imagined,” he says.

Plaskovitis agrees with his teacher. “When we ran the beam line at CERN, just a few meters away behind the wall was the weak force at work. Just like the sun. And we were right there next to it.” 

Kaltenhauser says that CERN plans to hold another Beam Line for Schools competition in the future.

 

Like what you see? Sign up for a free subscription to symmetry!

by Rich Blaustein at October 17, 2014 01:27 PM

The n-Category Cafe

'Competing Foundations?' Conference

FINAL CFP and EXTENDED DEADLINE: SoTFoM II `Competing Foundations?’, 12-13 January 2015, London.

The focus of this conference is on different approaches to the foundations of mathematics. The interaction between set-theoretic and category-theoretic foundations has had significant philosophical impact, and represents a shift in attitudes towards the philosophy of mathematics. This conference will bring together leading scholars in these areas to showcase contemporary philosophical research on different approaches to the foundations of mathematics. To accomplish this, the conference has the following general aims and objectives. First, to bring to a wider philosophical audience the different approaches that one can take to the foundations of mathematics. Second, to elucidate the pressing issues of meaning and truth that turn on these different approaches. And third, to address philosophical questions concerning the need for a foundation of mathematics, and whether or not either of these approaches can provide the necessary foundation.

Date and Venue: 12-13 January 2015 - Birkbeck College, University of London.

Confirmed Speakers: Sy David Friedman (Kurt Goedel Research Center, Vienna), Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen).

Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). While we welcome submissions from all areas concerned with foundations, particularly desired are submissions that address the role of and compare different foundational approaches. Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com, with subject `SOTFOM II Submission’.

Submission Deadline: 31 October 2014

Notification of Acceptance: Late November 2014

Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Goedel Research Center), Ian Rumfitt (University of Birmingham), Carolin Antos-Kuby (Kurt Goedel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni (Universita Vita-Salute S. Raffaele), Giorgio Venturi (CLE, Universidade Estadual de Campinas)

Organisers: Sy-David Friedman (Kurt Goedel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Carolin Antos-Kuby (Kurt Goedel Research Center)

Conference Website: sotfom [dot] wordpress [dot] com

Further Inquiries: please contact Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at) Neil Barton (bartonna [at] gmail [dot] com) Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at) John Wigglesworth (jmwigglesworth [at] gmail [dot] com)

The conference is generously supported by the Mind Association, the Institute of Philosophy, British Logic Colloquium, and Birkbeck College.

by david (d.corfield@kent.ac.uk) at October 17, 2014 01:13 PM

CERN Bulletin

Emilio Picasso (1927-2014)

Many people in the high-energy physics community will be deeply saddened to learn that Emilio Picasso passed away on Sunday 12 October after a long illness. His name is closely linked in particular with the construction of CERN’s Large Electron-Positron (LEP) collider.

 

Emilio studied physics at the University of Genoa. He came to CERN in 1964 as a research associate to work on the ‘g-2’ experiments, which he was to lead when he became a staff member in 1966. These experiments spanned two decades at two different muon storage rings and became famous for their precision studies of the muon and tests of quantum electrodynamics.

In 1979, Emilio became responsible for the coordination of work by several institutes, including CERN, on the design and construction of superconducting RF cavities for LEP. Then, in 1981, the Director-General, Herwig Schopper, appointed him as a CERN director and LEP project leader. Emilio immediately set up a management board of the best experts at CERN and together they went on to lead the construction of LEP, the world’s largest electron synchrotron, in the 27-km tunnel that now houses the LHC.

LEP came online just over 25 years ago on 14 July 1989 and ran for 11 years. Its experiments went on to perform high-precision tests of the Standard Model, a true testament to Emilio’s skills as a physicist and as a project leader.

We send our deepest condolences to his wife and family.


A full obituary will appear in a later edition of the Bulletin.

See also the CERN Courier, in which Emilio talks about the early days of the LEP project and its start-up.

October 17, 2014 01:10 PM

CERN Bulletin

UK school visit: Alfriston School for girls

Pupils with learning disabilities from Alfriston School in the UK visited the CMS detector last week. This visit was funded by the UK's Science and Technologies Facilities Council (STFC) as part of a grant awarded to support activities that will help to build the girls’ self-esteem and interest in physics.

 

Alfriston School students at CMS.

On Friday, 10 October, pupils from Alfriston School – a UK secondary school catering for girls with a wide range of special educational needs and disabilities – paid a special visit to CERN.

Dave Waterman, a science teacher at the school, recently received a Public Engagement Small Award from the STFC, which enabled the group of girls and accompanying teachers to travel to Switzerland and visit CERN. The awards form part of a project to boost the girls’ confidence and interest in physics. The aim is to create enthusiastic role models with first-hand experience of science who can inspire their peers back home.

By building pupils' self-esteem with regards to learning science, the project further aims to encourage students to develop the confidence to go on to study subjects related to science or engineering when they leave school.

Waterman first visited CERN as part of the UK Teachers Programme in December 2013, which was when the idea of bringing his pupils over for a visit was first suggested. "The main challenge with a visit of this kind is finding how to engage the pupils who don’t have much knowledge of maths," said Waterman. Dave Barney, a member of the CMS collaboration, rose to the challenge, hitting the level spot on with a short and engaging introductory talk just before the detector visit. Chemical-engineering student Olivia Bailey, who recently completed a year-long placement at CERN, accompanied the students on the visit. "Being involved in this outreach project was really fun," she said. "It was a great way of using my experience at CERN and sharing it with others."

For one pupil – Laura – this was her first journey out of England and her first time on a plane. "The whole trip has been so exciting," she said. "My highlight was seeing the detector because it was so much bigger than what I thought." Other students were similarly impressed, expressing surprise and awe as they entered the detector area.

October 17, 2014 01:10 PM

CERN Bulletin

CERN Library | Arthur I. Miller presents "Colliding worlds: How Cutting-Edge Science Is Redefining Contemporary Art" | 21 October
In recent decades, an exciting new art movement has emerged in which artists illuminate the latest advances in science. Some of their provocative creations - a live rabbit implanted with the fluorescent gene of a jellyfish, a gigantic glass-and-chrome sculpture of the Big Bang itself - can be seen in traditional art museums and magazines, while others are being made by leading designers at Pixar, Google's Creative Lab and the MIT Media Lab. Arthur I. Miller takes readers on a wild journey to explore this new frontier. From the movement's origins a century ago - when Einstein shaped Cubism and X-rays affected fine photography - to the latest discoveries of biotechnology, cosmology and quantum physics, Miller shows how today's artists and designers are producing work at the cutting edge of science.

Tuesday, 21 October 2014 at 14:30 in the Library, Bldg. 52 1-052: https://indico.cern.ch/event/346299/ Coffee will be served from 2 p.m.

"Colliding Worlds: How Cutting-Edge Science Is Redefining Contemporary Art", by Arthur I. Miller, W. W. Norton, 2014, ISBN 9780393083361. Publisher's information: http://books.wwnorton.com/books/Colliding-Worlds/

October 17, 2014 01:04 PM

Peter Coles - In the Dark

Arrivederci L’Aquila!

So here I am, then. In the British Airways Lounge at Roma Fiumicino Airport waiting for a flight back to Gatport Airwick. This morning’s bus journey from L’Aquila was as incident-free as the outbound journey, and I actually got to the airport about 10 minutes early. As I always do I planned the journey so I’d arrive in plenty of time for my flight, so now I get to relax and drink free wine among the Business Class types until I’m called to totter to the gate.

Fiumicino is a strange airport, clearly built in the 1960s with the intention that it should look futuristic, but with the inevitable result that it now feels incredibly dated, like a 1950s Science Fiction film.

Anyway, I’ve at last got a bit of time to kill so I’ll take the opportunity to brush up on my Italian. Let’s try translating this:

gusto

It’s obvious of course. House of Wind.

Ciao Ciao


by telescoper at October 17, 2014 11:47 AM

Clifford V. Johnson - Asymptotia

Sunday Assembly – Origin Stories
Sorry about the slow posting this week. It has been rather a busy time the last several days, with all sorts of deadlines and other things taking up lots of time. This includes things like being part of a shooting of a new TV show, writing and giving a midterm to my graduate electromagnetism class, preparing a bunch of documents for my own once-every-3-years evaluation (almost forgot to do that one until the last day!), and so on and so forth. Well, the other thing I forgot to do is announce that I'll be doing the local Sunday Assembly sermon (for want of a better word) this coming Sunday. I've just taken a step aside from writing it to tell you about it. You'll have maybe heard of Sunday Assembly since it has been featured a lot in the news as a secular alternative (or supplement) to a Sunday Church gathering, in many cities around the world (more here). Instead of a sermon they have someone come along and talk about a topic, and they cover a lot of interesting topics. They sound like a great bunch of people to hang out with, and I strongly [..] Click to continue reading this post

by Clifford at October 17, 2014 12:24 AM

October 16, 2014

John Baez - Azimuth

Network Theory Seminar (Part 2)

 

This time I explain more about how ‘cospans’ represent gadgets with two ends, an input end and an output end:

I describe how to glue such gadgets together by composing cospans. We compose cospans using a category-theoretic construction called a ‘pushout’, so I also explain pushouts. At the end, I explain how this gives us a category where the morphisms are electrical circuits made of resistors, and sketch what we’ll do next: study the behavior of these circuits.
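
To make the gluing step a little more concrete, here is a small toy sketch in Python (my own illustration, not code from the seminar or the lecture notes): for cospans of finite sets X -> N1 <- Y and Y -> N2 <- Z, the pushout takes the disjoint union of N1 and N2 and identifies the two images of each element of the shared end Y.

# Toy illustration of composing two cospans of finite sets by a pushout
# (my own sketch, not from the lecture notes): glue N1 and N2 along Y.

def compose_cospans(N1, N2, Y, leg1, leg2):
    """leg1: Y -> N1 (right leg of the first cospan),
       leg2: Y -> N2 (left leg of the second cospan).
       Returns the apex of the composite cospan (the pushout) as merged
       equivalence classes; the outer legs from X and Z would then map
       into these classes."""
    # elements of the disjoint union are tagged pairs
    nodes = [(1, n) for n in N1] + [(2, n) for n in N2]
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for y in Y:                      # glue leg1(y) in N1 to leg2(y) in N2
        union((1, leg1[y]), (2, leg2[y]))

    classes = {}
    for v in nodes:
        classes.setdefault(find(v), []).append(v)
    return list(classes.values())

# Example: two "wires" glued end to end at a shared terminal y.
N1, N2, Y = ["a", "b"], ["c", "d"], ["y"]
print(compose_cospans(N1, N2, Y, leg1={"y": "b"}, leg2={"y": "c"}))
# -> three equivalence classes: {a}, {b ~ c}, {d}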

These lecture notes provide extra details:

Network theory (part 31).


by John Baez at October 16, 2014 08:59 PM

The Great Beyond - Nature blog

Doctor bets against traditional Chinese medicine
Beijing

The Beijing University of Chinese Medicine is one institution where the government promotes the practice.

BUCM

A sceptic of traditional Chinese medicine is challenging practitioners of the age-old craft to prove themselves by putting his own money on the line. One has accepted the challenge. At stake is the claim that practitioners can discern whether a woman is pregnant by her pulse.

Traditional Chinese medicine (TCM) is a point of contention in China. Although the government is keen to promote its use in the clinic and, in modernized form, as part of drug discovery, some feel that much of it is unproven and that the government is throwing its money away. There have also been high-profile cases of fraud linked to such research, and the practice is criticized for its dependence on endangered species such as the Saiga antelope (Saiga tatarica).

Ah Bao, the online nickname of a burn-care doctor at Beijing Jishuitan hospital, has been an adamant critic of TCM on Chinese social media, often referring to it as “fake”. He issued the challenge on 13 September, and Zhen Yang, a practitioner at the Beijing University of Traditional Medicine, took him up on it.

Ah Bao put up 50,000 yuan (more than US$8,000), and at his urging others have donated more than 50,000 yuan, making the prize worth more than 100,000 yuan total. Ah Bao turned down Nature's request to be interviewed, saying that he has been overwhelmed by media attention.

Yang will have to assess with 80% accuracy whether women are pregnant. The two are reportedly working out the terms of the contest, with a tentative set-up involving 32 women who would be separated from Yang by a screen.
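
For a sense of scale - using my own back-of-the-envelope assumption that half of the women would be pregnant and that random guessing is the baseline, which is not part of the reported terms - getting at least 26 of 32 calls right (80% accuracy) by pure guessing has a probability well below one in a thousand, as this short Python check shows:

# Back-of-the-envelope check (my own assumptions, not the agreed protocol):
# probability of getting at least 26 of 32 calls right by guessing at random.
from math import comb

n, k = 32, 26
p_chance = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k}/{n} correct by chance) = {p_chance:.1e}")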

by David Cyranoski at October 16, 2014 06:59 PM

Peter Coles - In the Dark

Dark Matter from the Sun?

This afternoon, while I was struggling to pay attention during one of the presentations at the conference I'm attending, I noticed a potentially interesting story going around on Twitter. A little bit of research revealed that it relates to a paper on the arXiv with the title Potential solar axion signatures in X-ray observations with the XMM-Newton observatory by Fraser et al. The first author of this paper was George Fraser of the University of Leicester, who died the day after it was submitted to Monthly Notices of the Royal Astronomical Society. The paper has now been accepted, and the final version has appeared on the arXiv in advance of its publication on Monday. The Guardian has already run a story on it.

This is the abstract:

The soft X-ray flux produced by solar axions in the Earth’s magnetic field is evaluated in the context of ESA’s XMM-Newton observatory. Recent calculations of the scattering of axion-conversion X-rays suggest that the sunward magnetosphere could be an observable source of 0.2-10 keV photons. For XMM-Newton, any conversion X-ray intensity will be seasonally modulated by virtue of the changing visibility of the sunward magnetic field region. A simple model of the geomagnetic field is combined with the ephemeris of XMM-Newton to predict the seasonal variation of the conversion X-ray intensity. This model is compared with stacked XMM-Newton blank sky datasets from which point sources have been systematically removed. Remarkably, a seasonally varying X-ray background signal is observed. The EPIC count rates are in the ratio of their X-ray grasps, indicating a non-instrumental, external photon origin, with significances of 11(pn), 4(MOS1) and 5(MOS2) sigma. After examining the constituent observations spatially, temporally and in terms of the cosmic X-ray background, we conclude that this variable signal is consistent with the conversion of solar axions in the Earth’s magnetic field. The spectrum is consistent with a solar axion spectrum dominated by bremsstrahlung- and Compton-like processes, i.e. axion-electron coupling dominates over axion-photon coupling and the peak of the axion spectrum is below 1 keV. A value of 2.2e-22 /GeV is derived for the product of the axion-photon and axion-electron coupling constants, for an axion mass in the micro-eV range. Comparisons with limits derived from white dwarf cooling may not be applicable, as these refer to axions in the 0.01 eV range. Preliminary results are given of a search for axion-conversion X-ray lines, in particular the predicted features due to silicon, sulphur and iron in the solar core, and the 14.4 keV transition line from 57Fe.

The paper concerns a hypothetical particle called the axion and I see someone has already edited the Wikipedia page to mention this new result. The idea of the axion has been around since the 1970s, when its existence was posited to solve a problem with quantum chromodynamics, but it was later realised that if it had a mass in the correct range it could be a candidate for the (cold) dark matter implied to exist by cosmological observations. Unlike many other candidates for cold dark matter, which experience only weak interactions, the axion feels the electromagnetic interaction, despite not carrying an electromagnetic charge. In particular, in a magnetic field the axion can convert into photons, leading to a number of ways of detecting the particle experimentally, none so far successful. If they exist, axions are also expected to be produced in the core of the Sun.

This particular study involved looking at 14 years of X-ray observations in which there appears to be an unexpected seasonal modulation in the observed X-ray flux which could be consistent with the conversion of axions produced by the Sun into X-ray photons as they pass through the Earth’s magnetic field. Here is a graphic I stole from the Guardian story:

Conversion of axions into X-rays in the Earth’s magnetic field. Image Credit: University of Leicester

I haven’t had time to do more than just skim the paper so I can’t comment in detail; it’s 67 pages long. Obviously it’s potentially extremely exciting, but the evidence that the signal is produced by axions is circumstantial, and one would have to eliminate other possible causes of cyclical variation to be sure. The possibilities that spring first to mind as alternatives to the axion hypothesis relate to the complex interaction between the solar wind and Earth’s magnetosphere. However, if the signal is produced by axions there should be characteristic features in the spectrum of the X-rays produced that would appear to be very difficult to mimic. The axion hypothesis is therefore eminently testable, at least in principle, but current statistics don’t allow these tests to be performed. It’s tantalising, but if you want to ask me where I’d put my money I’m afraid I’d probably go for messy local plasma physics rather than anything more fundamental.
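
For readers curious what a seasonal-modulation analysis looks like in outline, here is a toy sketch in Python with entirely synthetic numbers - my own simplified model, nothing to do with the authors' actual pipeline - comparing a constant-rate fit of a stacked background with a fit that adds an annual sinusoid:

# Minimal sketch of a seasonal-modulation test on synthetic data (my own toy
# model, not the analysis in Fraser et al.): compare a constant-rate fit with
# a constant-plus-annual-sinusoid fit via the change in chi-squared.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(0, 14 * 365, 30.0)                          # ~14 years of monthly bins (days)
true_rate = 10.0 + 0.5 * np.cos(2 * np.pi * t / 365.25)   # injected 5% annual modulation
sigma = 0.3
data = true_rate + rng.normal(0.0, sigma, size=t.size)

# Model 1: constant rate
chi2_const = np.sum((data - data.mean()) ** 2) / sigma**2

# Model 2: constant + annual sinusoid, fitted linearly via a design matrix
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 365.25),
                     np.sin(2 * np.pi * t / 365.25)])
coeffs, *_ = np.linalg.lstsq(X, data, rcond=None)
chi2_season = np.sum((data - X @ coeffs) ** 2) / sigma**2

print("best-fit modulation amplitude:", np.hypot(coeffs[1], coeffs[2]))
print("delta chi^2 (2 extra parameters):", chi2_const - chi2_season)

In the real analysis the hard part is, of course, everything this sketch leaves out: instrumental effects, the removal of point sources, and the pointing history of the telescope.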

It seems to me that this is in some sense a similar situation to that of BICEP2: a potentially exciting discovery, which looks plausible, but with alternative (and more mundane) explanations not yet definitively ruled out. The difference is of course that this “discovery paper” has been refereed in the normal way, rather than being announced at a press-conference before being subjected to peer review…


by telescoper at October 16, 2014 06:23 PM

Lubos Motl - string vacua and pheno

An overlooked paper discovering axions gets published
What's the catch?

Sam Telfer has noticed and tweeted about a Royal Astronomical Society press release promoting today's publication (in Monthly Notices of the RAS; the link goes live next Monday) of a paper we could (or should) have discussed ever since March 2014, when it was sent to the arXiv – except that no one has discussed it, and the paper has no followups at this moment:
Potential solar axion signatures in X-ray observations with the XMM-Newton observatory by George Fraser and 4 co-authors
The figures are at the end of the paper, after the captions. Unfortunately, Prof Fraser died in March, two weeks after this paper was sent to the arXiv. This could make the story of the discovery, if it is real, rather dramatic; alternatively, you may view it as a compassionate piece of evidence that the discovery isn't real.



Yes, this photograph of five axions was posted on the blog of the science adviser of The Big Bang Theory. It is no bazinga.

This French-English paper takes some data from XMM-Newton, the X-ray Multi-Mirror Mission, an orbiting ESA X-ray telescope launched on an Ariane 5 rocket. My understanding is that the authors more or less assume that the orientation of this X-ray telescope is "randomly changing" relative to both the Earth and the Sun (which may be a problematic assumption, but they study some details about the changing orientation, too).

With this disclaimer, they look at the amount of X-rays with energies between \(0.2\) and \(10\keV\) and notice that the flux has a rather clear seasonal dependence. The significance of these effects is claimed to be 4, 5, and 11 sigma (!!!), depending on some details. Seasonal signals are potentially clever but possibly tricky, too: recall that DAMA and (later) CoGeNT have "discovered" WIMP dark matter using the seasonal signals, too.




What is changing as a function of the season (date) is mainly the relative orientation of the Sun and the Earth. If you ignore the Sun, the Earth is just a gyroscope that rotates in the same way during the year, far away from stars etc., so seasons shouldn't matter. If you ignore the Earth, the situation should be more or less axially symmetric, although I wouldn't claim it too strongly, so there should also be no seasonal dependence.




What I want to say – and what is reasonable, although not guaranteed – is that the seasonal dependence of a signal seen from an orbiting telescope probably needs to depend on both the Sun and the Earth. Their interpretation is that the axions are actually coming from the Sun and are later processed by the geomagnetic field.

The birth of the solar axions is either from a Compton-like process
\[ e^- + \gamma \to e^- + a \]
or from the (or, more precisely, die) Bremsstrahlung-like process
\[ e^- + Z \to e^- + Z + a, \]
where the electrons and gauge bosons are taken from the mundane thermal havoc within the Sun's core, unless I am wrong. This axion \(a\) is created and some of those fly towards the Earth. And in the part of the geomagnetic field pointing towards the Sun, the axions \(a\) are converted to photons \(\gamma\) via axion-to-photon conversion, i.e. the Primakoff effect (again: this process only works in an external magnetic field). The strength and relevance of the sunward geomagnetic field is season-dependent.



Their preferred picture is that there is an axion \(a\) with a mass of a few microelectronvolts that couples both to electrons and to photons. The product of these two coupling constants is said to be \(2.2\times 10^{-22}\GeV^{-1}\) – because the authors love to repeat the word "two". Their hypothesis (or interpretation of the signal) probably makes some specific predictions about the spectrum of the X-rays, and these should be checked – which they have tried to do – but after my first super-quick analysis of the paper, I don't see too many successes of these checks.

There are lots of points and arguments and possible loopholes and problems over here that I don't fully understand at this point. You are invited to teach me (and us) or think loudly if you want to think about this bold claim at all.

Clearly, if the signal were real, it would be an extremely important discovery. Dark matter could be made out of these axions. The existence of axions would have far-reaching consequences not just for CP-violation in QCD but also for the scenarios within string theory, thanks to the axiverse and related paradigms.

The first news outlets that posted stories about the paper today were The Guardian, Phys.ORG, EurekAlert, and Fellowship for ET aliens.

by Luboš Motl (noreply@blogger.com) at October 16, 2014 04:47 PM

ZapperZ - Physics and Physicists

No Women Physics Nobel Prize Winner In 50 Years
This article reports on the possible reasons why there has been no Physics Nobel Prize for a woman in 50 years.

But there's also, of course, the fact that the prize is awarded to scientists whose discoveries have stood the test of time. If you're a theorist, your theory must be proven true, which knocks various people out of the running. One example is Helen Quinn, whose theory with Roberto Peccei predicts a new particle called the axion. But the axion hasn't been discovered yet, and therefore they can't win the Nobel Prize.
.
.
Age is important to note. Conrad tells Mashable that more and more women are entering the field of physics, but as a result, they're still often younger than what the committee seems to prefer. According to the Nobel Prize website, the average age of Nobel laureates has even increased since the 1950s.
 .
.
But the Nobel Prize in Physics isn't a lifetime achievement award — it honors a singular accomplishment, which can be tricky for both men and women.

"Doing Nobel Prize-worthy research is a combination of doing excellent science and also getting lucky," Conrad says. "Discoveries can only happen at a certain place and time, and you have to be lucky to be there then. These women coming into the field are as excellent as the men, and I have every reason to think they will have equal luck. So, I think in the future you will start to see lots of women among the Nobel Prize winners. I am optimistic."

The article mentions the names of four women who are leading candidates for the Nobel Prize: Deborah Jin, Lene Hau, Vera Rubin, and Margaret Murnane. If you noticed, I wrote about Jin and Hau way back when, and I consider them to have done Nobel-caliber work. I can only hope that, during my lifetime, we will see a woman win this prize again after so long.

Zz.

by ZapperZ (noreply@blogger.com) at October 16, 2014 12:40 PM

ZapperZ - Physics and Physicists

Lockheed Fusion "Breakthrough" - The Skeptics Are Out
Barely a day after Lockheed Martin announced their "fusion breakthrough" in designing a workable and compact fusion reactor, the skeptics are already weighing in with their opinions, even though the details of Lockheed's design have not been clearly described.

"The nuclear engineering clearly fails to be cost effective," Tom Jarboe told Business Insider in an email. Jarboe is a professor of aeronautics and astronautics, an adjunct professor in physics, and a researcher with the University of Washington's nuclear fusion experiment.
.
.
"This design has two doughnuts and a shell so it will be more than four times as bad as a tokamak," Jarboe said, adding that, "Our concept [at the University of Washington] has no coils surrounded by plasma and solves the problem."

Like I said earlier, from the sketchy details that I've read, they are using a familiar technique for confinement, etc., something that has been used and studied extensively before. So unless they are claiming to have found something that almost everyone else has overlooked, this claim of theirs will need to be very convincing for others to accept. As stated in the article, Lockheed hasn't published anything yet, and they probably won't until they get patent approval of their design. That is what a commercial entity will typically do when it wants to protect its design and investment.

There's a lot more work left to do for this to be demonstrated.

Zz.

by ZapperZ (noreply@blogger.com) at October 16, 2014 12:26 PM

Tommaso Dorigo - Scientificblogging

No Light Dark Matter In ATLAS Search
Yesterday the ATLAS collaboration published the results of a new search for dark matter particles produced in association with heavy quarks by proton-proton collisions at the CERN Large Hadron Collider. Not seeing a signal, ATLAS produced very tight upper limits on the cross section for interactions of the tentative dark matter particle with nucleons, which is the common quantity on which dark matter search results are reported. The cross section is in fact directly proportional to the rate at which one would expect to see the hypothetical particle scatter off ordinary matter, which is what one directly looks for in many of today's dark matter search experiments.

read more

by Tommaso Dorigo at October 16, 2014 10:22 AM

Emily Lakdawalla - The Planetary Society Blog

As Deadlines Loom, LightSail Bends but Doesn't Break
The Planetary Society's LightSail-A spacecraft is close to completing a final series of tests that pave the way for a possible 2015 test flight. But as deadlines loom, a new problem has sent the team scrambling to make a quick repair.

October 16, 2014 02:49 AM

ZapperZ - Physics and Physicists

Lockheed Martin Claims Fusion Breakthrough
As always, we should reserve our judgement until we get this independently verified. Still, Lockheed Martin, out of the company's Skunk Works program (which was responsible for the Stealth technology), has made the astounding claim of potentially producing a working fusion reactor by 2017.

Tom McGuire, who heads the project, told Reuters that his team had been working on fusion energy at Lockheed’s Skunk Works program for the past four years, but decided to go public with the news now to recruit additional partners in industry and government to support their work.

Last year, while speaking at Google’s Solve for X program, Charles Chase , a research scientist at Skunk Works, described Lockheed’s effort to build a trailer-sized fusion power plant that turns cheap and plentiful hydrogen (deuterium and tritium) into helium plus enough energy to power a small city.

“It’s safe, it’s clean, and Lockheed is promising an operational unit by 2017 with assembly line production to follow, enabling everything from unlimited fresh water to engines that take spacecraft to Mars in one month instead of six,” Evan Ackerman wrote in a post about Chase’s Google talk on Dvice.

The thing that isn't very clear to me is the nature of the breakthrough that would allow them to do this, because what was written in the piece about using a magnetic bottle isn't new at all. This technique has been around for decades. I even saw one in the basement of the Engineering Research building at the University of Wisconsin-Madison back in the early 80s, when they were doing extensive research work in this area. So what exactly did they do that they think will succeed where others over many years couldn't?

I guess that is a trade secret for them right now and we will just have to wait for the details to trickle out later.

Zz.

by ZapperZ (noreply@blogger.com) at October 16, 2014 01:24 AM

October 15, 2014

Quantum Diaries

Let there be beam!

It’s been a little while since I’ve posted anything, but I wanted to write a bit about some of the testbeam efforts at CERN right now. In the middle of July this year, the Proton Synchrotron, or PS, the second ring of boosters/colliders used to get protons up to speed to collide in the LHC, saw its first beam since the shutdown at the end of Run I of the LHC. In addition to providing beam to experiments like CLOUD, the beam can also be used to create secondary particles of up to 15 GeV/c momentum, which are then used for studies of future detector technology. Such a beam is called a testbeam, and all I can say is WOOT, BEAM! I must say that being able to take accelerator data is amazing!

The next biggest milestone is the testbeams from the SPS, which started on the 6th of October. This is the last ring before the LHC. If you’re unfamiliar with the process used to get protons up to the energies of the LHC, a great video can be found at the bottom of the page.

Just to be clear, test beams aren’t limited to CERN. Keep your eyes out for a post by my friend Rebecca Carney in the near future.

I was lucky enough to be part of the test beam effort of LHCb, which was testing new technology both for the VELO and for the upgrade of the TT station, called the Upstream Tracker, or UT. I worked mainly with the UT group, testing a sensor technology which will be used in the 2019 upgraded detector. I won’t go too much into the technology of the upgrade right now, but if you are interested in the nitty-gritty of it all, I will instead point you to the Technical Design Report itself.

I just wanted to take a bit to talk about my experience with the test beam in July, starting with walking into the experimental area itself. The first sight you see upon entering the building is a picture reminding you that you are entering a radiation zone.

The Entrance!!

Then, as you enter, you see a large wall of radioactive concrete.

Don’t lick those!

This is where the beam is dumped. Following along here, you get to the control room, which is where all the data taking stuff is set up outside the experimental area itself. Lots of people are always working in the control room, focused and making sure to take as much data as possible. I didn’t take their picture since they were working so hard.

Then there’s the experimental area itself.

The Setup! To find the hardhat, look for the orange and green racks, then follow them towards the top right of the picture.

Ah, beautiful. :)

There are actually 4 setups here, but I think only three were being used at this time (click on the picture for a larger view). We occupied the area where the guy with the hardhat is.

Now the idea behind a tracker testbeam is pretty straightforward. A charged particle flies by, and many very sensitive detector planes record where the charged particle passed. These planes together form what’s called a “telescope.” The setup is completed when you add a detector to be tested either in the middle of the telescope or at one end.

Cartoon of a test beam setup. The blue indicates the “telescope”, the orange is the detector under test, and the red is the trajectory of a charged particle.

 

From the timing information and the signals from these detectors, the trajectory of the particle can be determined. Now you compare the position that your telescope gives you with the position you record in the detector you want to test, and voilà, you have a way to understand the resolution and abilities of your tested detector. After that, the game is statistics. Ideally, you want your detector to be in the middle of the telescope, so you have information on where the charged particle passed on either side of it, as this gives the best resolution, but it can work if you’re at one end or the other, too.
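
As a concrete (and heavily simplified) illustration of that comparison - with invented numbers and geometry, not the actual testbeam reconstruction software - one can fit a straight line through the telescope hits and look at the residual at the plane of the device under test:

# Toy version of the telescope/DUT comparison described above (a simplified
# sketch, not the real reconstruction): fit a straight track through the
# telescope hits and compare the extrapolated position with the DUT measurement.
import numpy as np

# z positions (mm) of telescope planes and the device under test (invented layout)
z_telescope = np.array([0.0, 50.0, 100.0, 200.0, 250.0, 300.0])
z_dut = 150.0

rng = np.random.default_rng(1)
true_slope, true_intercept = 0.002, 1.0                    # a straight track in x-z
hits = (true_intercept + true_slope * z_telescope
        + rng.normal(0.0, 0.005, size=z_telescope.size))   # ~5 micron telescope resolution
dut_hit = (true_intercept + true_slope * z_dut
           + rng.normal(0.0, 0.050))                       # coarser DUT resolution

# Least-squares straight-line fit through the telescope hits
slope, intercept = np.polyfit(z_telescope, hits, deg=1)
predicted_at_dut = intercept + slope * z_dut

residual = dut_hit - predicted_at_dut
print(f"residual at the DUT plane: {residual * 1000:.1f} microns")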

This is the setup which we have been using for the testbeam at the PS.  We’ll be using a similar setup for the testbeam at the SPS next week! I’ll try to write a follow up post on that when we finish!

And finally, here is the promised video.

 

by Adam Davis at October 15, 2014 08:37 PM

Quantum Diaries

Top quark still raising questions

This article appeared in symmetry on Oct. 15, 2014.

Why are scientists still interested in the heaviest fundamental particle nearly 20 years after its discovery? Photo: Reidar Hahn, Fermilab

“What happens to a quark deferred?” the poet Langston Hughes may have asked, had he been a physicist. If scientists lost interest in a particle after its discovery, much of what it could show us about the universe would remain hidden. A niche of scientists, therefore, stay dedicated to intimately understanding its properties.

Case in point: Top 2014, an annual workshop on top quark physics, recently convened in Cannes, France, to address the latest questions and scientific results surrounding the heavyweight particle discovered in 1995 (early top quark event pictured above).

Top and Higgs: a dynamic duo?
A major question addressed at the workshop, held from September 29 to October 3, was whether top quarks have a special connection with Higgs bosons. The two particles, weighing in at about 173 and 125 billion electronvolts, respectively, dwarf other fundamental particles (the bottom quark, for example, has a mass of about 4 billion electronvolts and a whole proton sits at just below 1 billion electronvolts).

Prevailing theory dictates that particles gain mass through interactions with the Higgs field, so why do top quarks interact so much more with the Higgs than do any other known particles?

Direct measurements of top-Higgs interactions depend on recording collisions that produce the two side-by-side. This hasn’t happened yet at high enough rates to be seen; these events theoretically require higher energies than the Tevatron or even the LHC’s initial run could supply. But scientists are hopeful for results from the next run at the LHC.

“We are already seeing a few tantalizing hints,” says Martijn Mulders, staff scientist at CERN. “After a year of data-taking at the higher energy, we expect to see a clear signal.” No one knows for sure until it happens, though, so Mulders and the rest of the top quark community are waiting anxiously.

A sensitive probe to new physics

Top and antitop quark production at colliders, measured very precisely, started to reveal some deviations from expected values. But in the last year, theorists have responded by calculating an unprecedented layer of mathematical corrections, which refined the expectations and promise to realign the slightly rogue numbers.

Precision is an important, ongoing effort. If researchers aren’t able to reconcile such deviations, the logical conclusion is that the difference represents something they don’t know about — new particles, new interactions, new physics beyond the Standard Model.

The challenge of extremely precise measurements can also drive the formation of new research alliances. Earlier this year, the first Fermilab-CERN joint announcement of collaborative results set a world standard for the mass of the top quark.

Such accuracy hones methods applied to other questions in physics, too, the same way that research on W bosons, discovered in 1983, led to the methods Mulders began using to measure the top quark mass in 2005. In fact, top quark production is now so well controlled that it has become a tool itself to study detectors.

Forward-backward synergy

With the upcoming restart in 2015, the LHC will produce millions of top quarks, giving researchers troves of data to further physics. But scientists will still need to factor in the background noise and data-skewing inherent in the instruments themselves, called systematic uncertainty.

“The CDF and DZero experiments at the Tevatron are mature,” says Andreas Jung, senior postdoc at Fermilab. “It’s shut down, so the understanding of the detectors is very good, and thus the control of systematic uncertainties is also very good.”

Jung has been combing through the old data with his colleagues and publishing new results, even though the Tevatron hasn’t collided particles since 2011. The two labs combined their respective strengths to produce their joint results, but scientists still have much to learn about the top quark, and a new arsenal of tools to accomplish it.

“DZero published a paper in Nature in 2004 about the measurement of the top quark mass that was based on 22 events,” Mulders says. “And now we are working with millions of events. It’s incredible to see how things have evolved over the years.”

Troy Rummler

by Fermilab at October 15, 2014 07:04 PM

Emily Lakdawalla - The Planetary Society Blog

Finally! New Horizons has a second target
What a huge relief: there is finally a place for New Horizons to visit beyond Pluto. A team of researchers led by John Spencer has discovered three possible targets, all in the Cold Classical part of the Kuiper belt. One is particularly easy to reach. New Horizons would fly past the 30-45-kilometer object in January 2019.

October 15, 2014 06:02 PM

Peter Coles - In the Dark

Choose My Mugshot

For some time now, staff and students of the School of Mathematical & Physical Sciences at the University of Sussex have complained that the picture of me on my office door is of a non-bearded person. Recently, therefore, I paid a visit to a professional photographer so he could take a picture of the hirsute me; he tried his best to make me look presentable in the process. I am now told I have to pick one of the following three shortlisted photographic representations. They all suffer from the problem that they look like me, so I have no idea which to pick. I thought I’d have a bit of a laugh and see if I can crowd-source a favourite.

Here are the contenders:

 

Photo A: Look into my eyes
Photo B: I heard that. Pardon?
Photo C: Il Penseroso

Please vote here

Take Our Poll: http://polldaddy.com/poll/8376648

by telescoper at October 15, 2014 05:13 PM

Axel Maas - Looking Inside the Standard Model

Challenging subtleties
I have just published a conference proceeding in which I return to an idea of how the standard model of particle physics could be extended. It is an idea I have already briefly written about: it is concerned with the question of what would happen if there were twice as many Higgs particles as there are in nature. The model describing this idea is therefore called the 2-Higgs(-doublet) model, or 2HDM for short. The word "doublet" in the official name is rather technical. It has something to do with how the second Higgs connects to the weak interaction.

As fascinating as the model itself may be, I do not want to write about its general properties. Given its popularity, you will find many things about it already on the web. No, here I want to write about what I want to learn about this theory in particular. And this is a peculiar subtlety. It connects to the research I am doing on the situation with just the single Higgs.

To understand what is going on, I have to dig deep into the theory stuff, but I will try to keep it not too technical.

The basic question is: what can we observe, and what can we not observe? One of the things a theoretician learns early on is that it may be quite helpful to have some dummies. This means adding something to a calculation just for the sake of making the calculation simpler. Of course, she or he has to make very sure that this does not affect the result. But if done properly, this can be of great help. The technical term for this trick is an auxiliary quantity.

Now, when we talk about the weak interactions, something special happens. If we assume that everything is indeed very weak, we can calculate results using so-called perturbation theory. And then an amazing thing happens: it appears as if the auxiliary quantities were real, and we could observe them. It is, and can only be, some kind of illusion. That this is indeed the case is something I have been working on for a long time, and others before me. It just comes out that the true quantities and the auxiliary quantities have the same properties, and therefore it does not matter which we take for our calculation. This is far from obvious, and pretty hard to explain without very much technical stuff. But since this is not the point I would like to make in this entry, let me skip these details.

That this is the case is actually a consequence of a number of 'lucky' coincidences in the standard model. Some particles have just the right mass. Some particles appear just in the right ratio of numbers. Some particles are just inert enough. Of course, as a theoretician, my experience is that there is no such thing as 'lucky'. But that is a different story (I know, I say this quite often this time).

Now, I finally return to the starting point: the 2HDM. In this theory, one can play the same kind of tricks with auxiliary quantities, perturbation theory and so on. If you assume that everything is just like in the standard model, this is fine. But is this really so? In the proceedings, I look at this question. In particular, I check whether perturbation theory should work. And what I find is: it may, but it is very unlikely to work in all the circumstances where one would like it to. In particular, in several scenarios in which one would like to have this property, it could indeed fail. For example, in some scenarios this theory could have twice as many weak gauge bosons - the so-called W and Z bosons - as we see in experiment. That would be bad, as it would contradict experiment and therefore invalidate these scenarios.

This is not the final word, of course - proceedings are just status reports, not final answers. But there may be, just may be, a difference. This is enough to require us (and, in this case, me) to make sure of what is going on. That will be challenging. But this time such a subtlety may make a huge difference.

by Axel Maas (noreply@blogger.com) at October 15, 2014 05:03 PM

Symmetrybreaking - Fermilab/SLAC

Top quark still raising questions

Why are scientists still interested in the heaviest fundamental particle nearly 20 years after its discovery?

“What happens to a quark deferred?” the poet Langston Hughes may have asked, had he been a physicist. If scientists lost interest in a particle after its discovery, much of what it could show us about the universe would remain hidden. A niche of scientists, therefore, stay dedicated to intimately understanding its properties.

Case in point: Top 2014, an annual workshop on top quark physics, recently convened in Cannes, France, to address the latest questions and scientific results surrounding the heavyweight particle discovered in 1995 (early top quark event pictured above).

Top and Higgs: a dynamic duo?

A major question addressed at the workshop, held from September 29 to October 3, was whether top quarks have a special connection with Higgs bosons. The two particles, weighing in at about 173 and 125 billion electronvolts, respectively, dwarf other fundamental particles (the bottom quark, for example, has a mass of about 4 billion electronvolts and a whole proton sits at just below 1 billion electronvolts).

Prevailing theory dictates that particles gain mass through interactions with the Higgs field, so why do top quarks interact so much more with the Higgs than do any other known particles?

Direct measurements of top-Higgs interactions depend on recording collisions that produce the two side-by-side. This hasn’t happened yet at high enough rates to be seen; these events theoretically require higher energies than the Tevatron or even the LHC’s initial run could supply. But scientists are hopeful for results from the next run at the LHC.

“We are already seeing a few tantalizing hints,” says Martijn Mulders, staff scientist at CERN. “After a year of data-taking at the higher energy, we expect to see a clear signal.” No one knows for sure until it happens, though, so Mulders and the rest of the top quark community are waiting anxiously.

A sensitive probe to new physics

Top and anti-top quark production at colliders, measured very precisely, started to reveal some deviations from expected values. But in the last year, theorists have responded by calculating an unprecedented layer of mathematical corrections, which refined the expectation and promise to realign the slightly rogue numbers.

Precision is an important, ongoing effort. If researchers aren’t able to reconcile such deviations, the logical conclusion is that the difference represents something they don’t know about—new particles, new interactions, new physics beyond the Standard Model.

The challenge of extremely precise measurements can also drive the formation of new research alliances. Earlier this year, the first Fermilab-CERN joint announcement of collaborative results set a world standard for the mass of the top quark.

Such accuracy hones methods applied to other questions in physics, too, the same way that research on W bosons, discovered in 1983, led to the methods Mulders began using to measure the top quark mass in 2005. In fact, top quark production is now so well controlled that it has become a tool itself to study detectors.

Forward-backward synergy

With the upcoming restart in 2015, the LHC will produce millions of top quarks, giving researchers troves of data to further physics. But scientists will still need to factor in the background noise and data-skewing inherent in the instruments themselves, called systematic uncertainty.

“The CDF and DZero experiments at the Tevatron are mature,” says Andreas Jung, senior postdoc at Fermilab. “It’s shut down, so the understanding of the detectors is very good, and thus the control of systematic uncertainties is also very good.”

Jung has been combing through the old data with his colleagues and publishing new results, even though the Tevatron hasn’t collided particles since 2011. The two labs combined their respective strengths to produce their joint results, but scientists still have much to learn about the top quark, and a new arsenal of tools to accomplish it.

“DZero published a paper in Nature in 2004 about the measurement of the top quark mass that was based on 22 events,” Mulders says.  “And now we are working with millions of events. It’s incredible to see how things have evolved over the years.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Troy Rummler at October 15, 2014 03:48 PM

Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3808 — October 10, 2014
Opportunity will become a comet flyby mission beginning in mid-October. The comet Siding Spring will zoom past Mars at a distance of about 135,000 km on October 19.

October 15, 2014 03:22 PM

arXiv blog

Emerging Evidence Shows How Computer Messaging Helps Autistic Adults Communicate

Anecdotal reports suggest that autistic adults benefit from computer-based communication. Now the scientific evidence is building.

October 15, 2014 03:19 PM

Lubos Motl - string vacua and pheno

A good popular text on gravitons and its limitations
In the past 24 hours, I saw a couple of news reports and popular articles about particle physics that were at least fine. For example, Physics World wrote about an experiment looking for WISP dark matter (it's like WIMP but "massive" is replaced by "sub-eV", and axions are the most famous WISPs). The Wall Street Journal wrote something about the RHIC experiment – unfortunately, the text only attracted one comment. The lack of interest in such situations is mostly due to the missing "controversy" and to the technical character of the information.

But I want to mention a text by a "daily explainer" Esther Inglis-Arkell at IO9.com
What are Gravitons and Why Can't We See Them?
which is pretty good, especially if one realizes that the author doesn't seem to be trained in these issues. Before I tell you about some flaws of the article, I want to focus on what I consider good about it because that may be more important in this case.




First, the article is a product by an "explainer". Its goal is to "explain" some background. This activity is hugely missing in the "news stories" about physics, especially cutting-edge physics. Physics is like a pyramid with many floors built on top of each other and the newest discoveries almost never "rebuild" the basement. They reorganize and sometimes rebuild the top floor and maybe the floor beneath it.

Everyone who wants to understand a story about this reconstruction of the near-top floor simply has to know something about some of the floors beneath it. Unfortunately, most of the science journalists are pretending that the news stories may be directly relevant for someone who has no background, who doesn't know about simpler and more fundamental "cousins" of the latest events. It is not possible.




A related point is that this article tries to present a picture that is coherent. The storyline isn't too different from the structure of an article that an expert could construct. What is important is that it doesn't distract the readers with topics that are clearly distractions.

In particular, it tells you that the problem with the non-renormalizability of gravitons is solved by string theory, and why, without misleading "obligatory" comments about loop quantum gravity and similar "alternative" viewpoints on vaguely related matters. Most articles unfortunately try to squeeze as much randomly collected rubbish analogous to loop quantum gravity in the few paragraphs as possible so that the result is unavoidably incoherent. Almost all readers are trying to build an opinion that "combines" or "interpolates" in between all these ideas.

But if you try to "combine" valid string theory with "components" imported from some alternative "sciences", you may end up with an even more illogical pile of rubbish than if you just parrot the alternative "science" separately. Those things simply should be segregated. Even if there were some real doubts that string theory is the only framework that avoids the logical problems with the gravitons' self-interactions, and there aren't really any doubts, there should be articles that only talk about one paradigm or another – just like actual scientists aren't jumping from one framework to another every minute. And the articles shouldn't focus on the "spicy interactions" between the vastly different research directions because that's not what the good scientists are actually spending their time with, or what is needed to understand Nature.

Some people might argue that things like loop quantum gravity shouldn't be "obligatory" in similar articles because the people researching it professionally represent something like 10% of the researchers in quantum gravity, write 5% of the articles, and receive about 1% of the citations. So these are small numbers which is a reason to neglect those things. But I don't actually believe in such an approach or in such a justification of the omission. It is perfectly OK to investigate things that represent a minority of researchers, papers, and citations. What's more important is that someone thinking about physics must preserve some consistency and focus on the content – and constant distractions by sociologically flavored metaphysical debates are no good for genuine progress in physics.

OK, let me now say a few comments about the flaws of the IO9 article on gravitons.

It says that gravitons are particles that self-interact, unlike photons and like gluons; that this leads to the non-renormalizability of quantized general relativity; and that string theory solves the problem by replacing the point-like gravitons by strings, which makes the self-replication of gravitons manageable. The LHC has been and will be looking for gravitons in warped geometry scenarios in the form of missing transverse energy.

The writer offers some rather usual comments about the forces and virtual particles that mediate them. Somewhere around this paragraph I started to have some problems:
What we call "force" at the macro level seems to be conveyed by particles at the micro level. The graviton should be one of these particles. The trouble with gravitons - or, more precisely, the first of many troubles with gravitons - is that gravity isn't supposed to be a force at all. General relativity indicates that gravity is a warp in spacetime. General relativity does allow for gravitational waves, though. It's possible that these waves could come in certain precise wavelengths the way photons do, and that these can be gravitons.
The first thing that I don't like about these comments is that they suggest that there is a contradiction between "gravity is a force" and "gravity is a warp in spacetime". There is no contradiction and there has never been one. "A warp in spacetime" is a more detailed explanation of how the force works, much like some biology of muscle contractions "explains" where the force of our hands comes from.

In this context, I can't resist making a much more general remark about laymen's logic. When you say that "a graviton is an XY" and "a graviton is a UV", they deduce that either there is a contradiction, or "UV" is the same thing as "XY". But this ain't the case. The sentences say that the "set of gravitons" is a subset of the "set of XYs" or the "set of UVs". All these sets may be different and "XY" may still be a different set than "UV" while the propositions about the subsets may still hold. "XY" and "UV" may overlap – and sometimes, one of them may be a subset of the other. Many laymen (and I don't really mean the author of the article, who seems much deeper) just seem to be lacking this "basic layer of structured thinking". They seem to know only one meaning of the phrase "A is B", namely "A is completely synonymous with a different word B". But no non-trivial science could ever be built if this were the only allowed type of "is". If it were so, science would be reduced to translating several pre-existing objects into different languages or dialects.

I also have problems with the last sentence of the paragraph, the claim that "gravitons could come with precise wavelengths" just like photons. In the Universe, both gravitons and photons are demonstrably allowed to have any real positive value of the wavelength (the Doppler shift arising from a change of the inertial system is the simplest way to change the wavelength of a photon or a graviton to any value you want), although particular photons and gravitons resulting from some process may have a specific wavelength (or a specific distribution of wavelengths). Moreover, she talks about gravitons when she should logically be talking about gravitational waves – which are coherent states of many gravitons in the same state, something she doesn't seem to explain at all.

In the section about gravitons and string theory, she writes that gravitons are technically "gauge bosons". It is a matter of terminology whether gravitons are gauge bosons. Conceptually, they sort of are but exactly if we add the word "technically", I think that most physicists would say that gravitons are technically not gauge bosons because the term "gauge bosons" is only used for spin-1 particles. She says lots of correct things about the spins herself, however.

Then she describes a "recursive process" of production of new photons and (using the normal experts' jargon) addition of loops to the Feynman diagrams. Things are sort of OK but at one point we learn
Although this burst of particles may get hectic, it doesn't produce an endless branching chain of photons.
It actually does. Loop diagrams with an unlimited number of loops (and virtual photons) contribute. The number of terms is infinite. The point is that one may sum this infinite collection of terms and get a finite result. And the finite result isn't really a "direct outcome" of the procedure. A priori, the (approximately) geometric series is actually divergent. However, the quotient (I guess the right English word is the common ratio, and I will use the latter), which is greater than one (naively infinite), may be "redefined" to be a finite number smaller than one, and that's why the (approximately) geometric series converges after this process and yields a finite result.

This is the process of renormalization and the theory is renormalizable if we only need to "redefine" a finite number of "types" of divergent common ratios or objects.

(A special discussion would be needed for infrared divergences. When it comes to very low-energy photons, one literally produces an infinite number of very soft photons if two charged objects repel one another, for example. And this infinite number of photons is no illusion and is never "reduced" to any finite number. Calculations of quantities that are "truly measurable by real-world devices with their limitations" can still be done and yield finite results so the infinities encountered as infrared divergences are harmless if we are really careful about what is a measurable question and what is not.)

Concerning the renormalizability, she writes:
Because of this, photons and electron interactions are said to be renormalizable. They can get weird, but they can't become endless.
Again, they can and do become endless, but it's not a problem. It may be a good idea to mention Zeno's paradoxes such as Achilles and the turtle. Zeno believed that Achilles could never catch up with the turtle because the path to the hypothetical point where he catches up may be divided into infinitely many pieces. And Zeno was implicitly assuming that the sum of an infinite number of terms (durations) had to be divergent. That wasn't the case. Infinite series often converge.

When mathematicians started to become smarter than Zeno and his soulmates, they saw that Zeno's paradoxes weren't paradoxes at all and some of Zeno's assumptions (or hidden assumptions) were simply wrong. Similarly, when Isaac Newton and his enemy developed the calculus, they were already sure that Zeno's related paradox, "the arrow paradox", wasn't a paradox, either. Zeno used to argue that an arrow cannot move because the trajectory may be divided into infinitesimal pieces and the arrow is static in each infinitesimal subinterval. Therefore, he reasoned, the arrow must always be static. Well, of course, we know that if you divide the path into infinitely many infinitesimal pieces, you may get, and you often do get, a finite total distance.

In this sense, mathematicians were able to see that many previous paradoxes aren't really paradoxes. We may continue and present renormalization as another solution to a previous would-be Zeno-like paradox. Not only is it OK if there are infinitely many terms in the expansion in Feynman diagrams; it is even OK if this expansion is naively divergent – as long as the number of the "types of divergences" that have to be redefined to a finite number is finite.
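To make the convergence point concrete, here is a tiny Python illustration (mine, obviously not from the IO9 article): the partial sums of the Achilles-style geometric series 1/2 + 1/4 + 1/8 + ... creep up towards the finite limit 1, even though the number of terms is infinite.

# A minimal numerical illustration: partial sums of a geometric series with |ratio| < 1
# converge to a finite limit, which is the point behind dissolving Zeno's paradoxes.
def partial_sums(ratio, n_terms):
    """Partial sums of ratio + ratio^2 + ... + ratio^n_terms (assumes |ratio| < 1)."""
    total, sums, term = 0.0, [], ratio
    for _ in range(n_terms):
        total += term
        sums.append(total)
        term *= ratio
    return sums

for n, s in enumerate(partial_sums(0.5, 10), start=1):
    print("after %2d terms: %.6f" % (n, s))   # approaches the exact limit 1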

The author of the article similarly discusses the loops with many gravitons and concludes:
That huge amount of energy causes the newly-created graviton to create yet another graviton. This endless cycle of graviton production makes gravitons nonrenormalizable.
This is of course deeply misleading. As the article didn't mention (even though it claimed to discuss the gluons as well), the gluons self-interact in the same sense as gravitons do. But the gluons' self-interactions – which may also involve an arbitrary number of virtual gluons in multiloop diagrams – are renormalizable and therefore harmless because the series we have to resum is closer to a geometric series and it is enough to renormalize the naively divergent common ratio in order to tame the whole sum.

In the case of the gravitons, the common ratio is "more divergent" because the high-energy virtual gravitons have stronger interactions (gravity couples to energy and not just the constant charge) and the series is further from a geometric one because the common ratios are not so common – the ratios increase with the number of loops. That's why we face an infinite number of "types of divergences", an infinite spectrum of something that looks like a common ratio of a geometric series but it is not really common. To determine a finite result of this sum, we would need to insert an infinite amount of information to the theory – to redefine an infinite number of distinct divergent objects to finite values. And in the absence of a hierarchy that would make most of these divergences "inconsequential", this process renders the theory unpredictive because it's simply not possible to measure or otherwise determine the value of the "infinitely many different divergent integrals".

To summarize, the author has oversimplified the situation and said that the nonrenormalizability arises whenever the processes have contributions from arbitrarily complicated loop diagrams. But they always do, and that is not yet a problem. The real problem of nonrenormalizability only arises when the sequence of "cures" analogous to Newton's cure of Zeno's arrow paradox is applied and fails anyway.

She says that strings are extended which cures the problem and...
That bit of wiggle room keeps the creation of a graviton from being so energetic that it necessitates the creation of yet another graviton, and makes the theory renormalizable.
Well, again, string theory doesn't change anything about the fact that arbitrarily complicated multiloop diagrams contribute to the total amplitude, but strings do cure the problems with nonrenormalizability. Still, is it quite right to say that "strings make the theory renormalizable"?

Not really. First of all, strings aren't "surgeons" that would cure a field theory. Instead, strings replace it with a different theory that isn't a quantum field theory in the conventional sense (if we're strict about its definition – and if we overlook conceptually difficult dualities that show that string theory and field theories are really inseparable and piecewise equivalent as classes of physical theories). So strings don't do anything with "the theory". Instead, they tell us to use a different theory, string theory!

Second, it isn't quite right to say that string theory (as a theory in the spacetime) is renormalizable. In fact, string theory as a theory in the spacetime is completely finite, so divergences from short-distance processes never arise in the first place. So this "problem" or "disease" that arises in almost all quantum field theories doesn't arise in string theory at all – which also means that it doesn't have to be cured. (String theory changes nothing about the emergence of infrared divergences in many situations or vacua – they were real physics in quantum field theory and have to remain real physics in any valid theory going beyond field theory.) String theory still involves some calculations on the world sheet that have intermediate divergences that have to be treated, and yes, the theory on the world sheet is a field theory and it is a renormalizable one. But because the dynamics of string theory in the spacetime isn't a field theory, it doesn't even make sense to ask whether it is renormalizable. The adjective "renormalizable" is only well-defined for field theories.

Finally, she talks about the possible detection of gravitons. She is aware that there are facilities such as LIGO that should look for gravitational waves but if you want to see a graviton, an individual particle, you need something as good as the LHC. I am adding something here because as I have mentioned, the article hasn't really clarified the relationship between gravitational waves and gravitons at all.

Her comments about the LHC are referring to the theories with large or warped extra dimensions that could make some effects involving gravitons observable. I think that it is "much less likely than 50%" that the LHC will observe something like that and I would surely mention this expectation in an article I would write. But there is really no rock-solid argument against these scenarios so the LHC may observe these things.

The only other complaint I have against this part of the text is that she used the word "hole" for the missing transverse energy – a potential experimental sign indicating that the gravitons were sent in the extra dimensions. I had to spend half a minute to figure out what the hole was supposed to mean – I was naturally thinking about "holes in the Dirac sea" analogous to positrons as "holes in the sea of electrons". There's some sense in which a "hole" is just fine as a word for the "missing transverse energy" but its usage is non-standard and confusing. Physicists imagine rather specific things if you say a "hole" or a "missing energy" – if you know what these phrases mean, you should appreciate how much harder it is for the laymen who are imagining "holes" in the billiard table or "missing energy" after a small breakfast, or something like that. It can't be surprising that they're often led to completely misunderstand some texts about physics even though the texts look "perfectly fine" to those who know the right meaning of the words in the context.

I've mentioned many flaws of the article but my final point is that those are unavoidable for an article that was more ambitious than the typical popular ones. And if one wanted to "grade" the article according to these flaws, he shouldn't forget about the context – and the context is that the author actually decided to write a much more technical, detailed, less superficial article than what you may see elsewhere. Writers should be encouraged to write similar things even if there are similar technical problems. They can get fixed as the community of writers and their readership get more knowledgeable about all the issues.

But if writers decide not to write anything except for superficial – and usually sociological and "spicy" – issues, there is nothing to improve and the readers won't really ever have any tools to converge to any semi-qualified let alone qualified opinions about the physics itself.

by Luboš Motl (noreply@blogger.com) at October 15, 2014 11:37 AM

The n-Category Cafe

The Atoms of the Module World

In many branches of mathematics, there is a clear notion of “atomic” or “indivisible” object. Examples are prime numbers, connected spaces, transitive group actions, and ergodic dynamical systems.

But in the world of modules, things aren’t so clear. There are at least two competing notions of “atomic” object: simple modules and, less obviously, projective indecomposable modules. Neither condition implies the other, even when the ring we’re over is a nice one, such as a finite-dimensional algebra over a field.

So it’s a wonderful fact that when we’re over a nice ring, there is a canonical bijection between {isomorphism classes of simple modules} and {isomorphism classes of projective indecomposable modules}.

Even though neither condition implies the other, modules that are “atoms” in one sense correspond one-to-one with modules that are “atoms” in the other. And the correspondence is defined in a really easy way: a simple module S corresponds to a projective indecomposable module P exactly when S is a quotient of P.

This fact is so wonderful that I had to write a short expository note on it (update — now arXived). I’ll explain the best bits here — including how it all depends on one of my favourite things in linear algebra, the eventual image.

It’s clear how the simple modules might be seen as “atomic”. They’re the nonzero modules that have no nontrivial submodules.

But what claim do the projective indecomposables have to be the “atoms” of the module world? Indecomposability, the nonexistence of a nontrivial direct summand, is a weaker condition than simplicity. And what does being projective have to do with it?

The answer comes from the Krull-Schmidt theorem. This says that over a finite enough ring A, every finitely generated module is isomorphic to a finite direct sum of indecomposable modules, uniquely up to reordering and isomorphism.

In particular, we can decompose the A-module A as a sum P_1 \oplus \cdots \oplus P_n of indecomposables. Now the A-module A is projective (being free), and each P_i is a direct summand of A, from which it follows that each P_i is projective indecomposable. We’ve therefore decomposed A, uniquely up to isomorphism, as a direct sum of projective indecomposables.

But that’s not all. The Krull-Schmidt theorem also implies that every projective indecomposable A-module appears on this list P_1, \ldots, P_n. That’s not immediately obvious, but you can find a proof in my note, for instance. And in this sense, the projective indecomposables are exactly the “pieces” or “atoms” of A.

Here and below, I’m assuming that A is a finite-dimensional algebra over a field. And in case any experts are reading this, I’m using “atomic” in an entirely informal way (hence the quotation marks). Inevitably, someone has given a precise meaning to “atomic module”, but that’s not how I’m using it here.

One of the first things we learn in linear algebra is the rank-nullity formula. This says that for an endomorphism \theta of a finite-dimensional vector space V, the dimensions of the image and kernel are complementary:

dim\, im \theta + dim\, ker \theta = dim V.

Fitting’s lemma says that when you raise \theta to a high enough power, the image and kernel themselves are complementary:

im \theta^n \oplus ker \theta^n = V \qquad (n \gg 0).

I’ve written about this before, calling im \theta^n the eventual image, im^\infty \theta, and calling ker \theta^n the eventual kernel, ker^\infty \theta, for n \gg 0. (They don’t change once n gets high enough.) But what I hadn’t realized is that Fitting’s lemma is incredibly useful in the representation theory of finite-dimensional algebras.
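Here is a quick numerical sanity check of Fitting's lemma (my own illustration, not part of the note) for one made-up endomorphism \theta of a 4-dimensional space: for n = 1 the image and kernel of \theta^n still overlap, but from n = 2 on they are complementary, and those stable subspaces are the eventual image and eventual kernel.

# theta acts invertibly on the first two coordinates and nilpotently on the last two.
import numpy as np

theta = np.array([[2., 1., 0., 0.],
                  [0., 3., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 0., 0., 0.]])

def kernel_basis(A, tol=1e-10):
    """Orthonormal basis of ker(A), read off from the SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

for n in [1, 2, 3]:
    p = np.linalg.matrix_power(theta, n)
    K = kernel_basis(p)
    dim_im = np.linalg.matrix_rank(p)
    dim_sum = np.linalg.matrix_rank(np.hstack([p, K]))  # dim(im theta^n + ker theta^n)
    print("n = %d: dim im = %d, dim ker = %d, dim(im + ker) = %d"
          % (n, dim_im, K.shape[1], dim_sum))
# For n = 1 the sum has dimension 3 (image and kernel share a line);
# for n >= 2 it has dimension 4, so im theta^n and ker theta^n are complementary.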

For instance, Fitting’s lemma can be used to show that every projective indecomposable module is finitely generated — and indeed, cyclic (that is, generated as a module by a single element). Simple modules are cyclic too, since the submodule generated by any nonzero element must be the module itself. So, both projective indecomposable and simple modules are “small”, in the sense of being generated by a single element. In other words:

Atoms are small.

Whatever “atom” means, they should certainly be small!

But also, “atoms” shouldn’t have much internal structure. For instance, an atom shouldn’t have enough complexity that it admits lots of interesting endomorphisms. There are always going to be some, namely, multiplication by any scalar, and this means that the endomorphism ring End(M) of a nonzero module M always contains a copy of the ground field K. But it’s a fact that when M is atomic in either of the two senses I’m talking about, End(M) isn’t too much bigger than K.

Let me explain that first for simple modules, since that’s, well, simpler.

A basic fact about simple modules is:

Every endomorphism of a simple module is invertible or zero.

Why? Because the kernel of such an endomorphism is a submodule, so it’s either zero or the whole module. So the endomorphism is either zero or injective. But it’s a linear endomorphism of a finite-dimensional vector space, so “injective” and “surjective” and “invertible” all mean the same thing.

Assume from now on that K is algebraically closed. Let S be a simple module and \theta an endomorphism of S. Then \theta has an eigenvalue, \lambda, say. But then (\theta - \lambda\cdot id) is not invertible, and must therefore be zero.

What we’ve just shown is that the only endomorphisms of a simple module are the rescalings \lambda\cdot id (which are always there for any module). So End(S) = K:

A simple module has as few endomorphisms as could be.

Now let’s do it for projective indecomposables. Fitting’s lemma can be used to show:

Every endomorphism of an indecomposable finitely generated module is invertible or nilpotent.

That’s easy to see: writing M for the module and \theta for the endomorphism, we can find n \geq 1 such that im \theta^n \oplus ker \theta^n = M. Since M is indecomposable, im \theta^n is either 0, in which case \theta is nilpotent, or M, in which case \theta is surjective and therefore invertible. Done!

I said earlier that (by Fitting’s lemma) every projective indecomposable is finitely generated. So, every endomorphism of a projective indecomposable is invertible or nilpotent.

Let’s try to classify all the endomorphisms of a projective indecomposable module P. We’re hoping there aren’t many.

Exactly the same argument as for simple modules — the one with the eigenvalues — shows that every endomorphism of a projective indecomposable module is of the form \lambda\cdot id + \varepsilon, where \lambda is a scalar and \varepsilon is a nilpotent endomorphism. So if you’re willing to regard nilpotents as negligible (and why else would I have used an \varepsilon?):

A projective indecomposable module has nearly as few endomorphisms as could be.

(If you want to be more precise about it, End(P) is a local ring with residue field K. All that’s left to prove here is that End(P) is local, or equivalently that for every endomorphism \theta, either \theta or id - \theta is invertible. We can prove this by contradiction. If neither is invertible, both are nilpotent — and that’s impossible, since the sum of two commuting nilpotents is again nilpotent, but their sum \theta + (id - \theta) = id is certainly not nilpotent.)

So all in all, what this means is that for “atoms” in either of our two senses, there are barely more endomorphisms than the rescalings. More poetically:

Atoms have very little internal structure.

My note covers a few more things than I’ve mentioned here, but I’ll mention just one more. There is, as I’ve said, a canonical bijection between isomorphism classes of projective indecomposable modules and isomorphism classes of simple modules. But how big are these two sets of isomorphism classes?

The answer is that they’re finite. In other words, there are only finitely many “atoms”, in either sense.

Why? Well, I mentioned earlier that as a consequence of the Krull-Schmidt theorem, the A-module A is a finite direct sum P_1 \oplus \cdots \oplus P_n of projective indecomposables, and that every projective indecomposable appears somewhere on this list (up to iso, of course). So, there are only finitely many projective indecomposables. It follows that there are only finitely many simple modules too.

An alternative argument comes in from the opposite direction. The Jordan-Hölder theorem tells us that the A-module A has a well-defined set-with-multiplicity S_1, \ldots, S_r of composition factors, which are simple modules, and that every simple module appears somewhere on this list. So, there are only finitely many simple modules. It follows that there are only finitely many projective indecomposables too.

by leinster (tom.leinster@ed.ac.uk) at October 15, 2014 01:31 AM

October 14, 2014

Symmetrybreaking - Fermilab/SLAC

Jokes for nerds

Webcomic artist Zach Weinersmith fuels ‘Saturday Morning Breakfast Cereal’ with grad student humor and almost half of a physics degree.

Zach Weinersmith, creator of popular webcomic “Saturday Morning Breakfast Cereal,” doesn’t know all the things you think he knows—but he’s working on it.

Reading certain SMBC comics, you could be forgiven for assuming Weinersmith (his married name) possesses a deep knowledge of math, biology, psychology, mythology, philosophy, economics or physics—even if that knowledge is used in service of a not-so-academic punch line.

In reality the artist behind the brainy comic simply loves to read. “I think I’m a very slow learner,” Weinersmith says. “I just work twice as hard.”

Around 2007, before SMBC took off, Weinersmith was working in Hollywood, producing closed captioning for television programs. He was taken with a sudden desire to understand how DNA works, so he bought a stack of textbooks and started researching in his spare time.

“Before that, my comic was straight comedy,” he says. He began to inject some of what he was learning into his writing. It was a relief, he found. “It’s much harder to make funny jokes than it is to talk about things.”

That year, SMBC was recognized at the Web Cartoonists’ Choice Awards and became popular enough for Weinersmith to quit his job and write full time. But he started to get bored.

“Imagine being 25 and self-employed,” he says.

What better way to cure boredom than to pursue a degree in physics? He took a few semesters of classes at San Jose State until he realized he was stretching himself too thin.

“I have three-eighths of a physics degree,” he says, which is probably perfect. “If you say three things about a topic, people assume you know the rest of it.

“I really think there’s this sweet spot. Right when I’m learning something, I have all these hilarious ideas. Once you’re a wizened gray-beard, nothing works.”

That hasn’t soured Weinersmith on scholarship. Last year he hosted his first live event, the Festival of Bad Ad Hoc Hypotheses, in which he invites speakers to compete to give the best serious argument for a completely ridiculous idea. It was inspired by a comic arguing the evolutionary benefits of aerodynamic babies.

Weinersmith runs the festival with a panel of judges and his wife, biologist Kelly Weinersmith, whose trials and tribulations in academia inspire much of his writing.

The appeal of BAHFest can be hard to explain, he says. “People see the video [of last year’s event] and say, ‘What the hell is the audience laughing about? That was barely a joke.’”

The key, he says, is to get rid of the jokes entirely. “It’s not stand-up; it’s play-acting,” he says. “Let this thing you’re doing be the joke.”

BAHFest will take place October 19 at MIT in Boston and October 25 at the Castro Theatre in San Francisco.

Courtesy of: Zach Weinersmith

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at October 14, 2014 01:00 PM

The Great Beyond - Nature blog

Australia puts science in ‘competitiveness’ drive
ian-macfarlane_0

Minister for Industry Ian Macfarlane

Australian Department of Industry

The Australian government has unveiled plans to increase the commercial return on its billions in research funding and to pump more resources into boosting industry-science links.

The government appointed ten experts — five business leaders and five leading researchers — to a ‘Commonwealth Science Council’ to advise on science priorities and to become the “pre-eminent body for advice on science and technology” in Australia, according to the ‘competitiveness agenda’ released on 14 October.

The council will be chaired by Prime Minister Abbott, with Industry Minister Ian Macfarlane as deputy chair. It will replace an existing (and some say moribund) advisory group.

The statement also says that there will be a “sharpening” of incentives for collaboration between research and industry. Five new centres to improve collaboration, and increase the competitiveness of industries including mining, oil and medical technologies, will be set up at a cost of Aus$188.5 million (US$164 million).

The Abbott government has come in for fierce criticism over its perceived lack of support for science, with many government-funded researchers and science agencies facing cutbacks (see ‘Australian cuts rile researchers’). Macfarlane has previously said that the competitiveness agenda would show how the government was dealing with these concerns, by setting science at the centre of industry policy.

Australia’s chief scientist Ian Chubb said that the new council would “provide the strategic thinking and direction that a national transformation truly demands” and also welcomed an Aus$12 million investment in science education. “This is about improving the impact, focus and prioritisation of Australia’s investment in science and research,” he said in a statement.

The Australian Academy of Science also welcomed the announcements. Its secretary of science policy Les Field said in a statement: “Anything which aligns science more closely with industry has got to be a big plus, especially when this is an area where Australia traditionally struggles.”

by Daniel Cressey at October 14, 2014 11:36 AM

John Baez - Azimuth

El Niño Project (Part 8)

So far we’ve rather exhaustively studied a paper by Ludescher et al which uses climate networks for El Niño prediction. This time I’d like to compare it with another paper:

• Y. Berezin, Avi Gozolchiani, O. Guez and Shlomo Havlin, Stability of climate networks with time, Scientific Reports 2 (2012).

Some of the authors are the same, and the way they define climate networks is very similar. But their goal here is different: they want to see how stable climate networks are over time. This is important, since the other paper wants to predict El Niños by changes in climate networks.

They divide the world into 9 zones:

For each zone they construct several climate networks. Each one is an array of numbers W_{l r}^y, one for each year y and each pair of grid points l, r in that zone. They call W_{l r}^y a link strength: it’s a measure of how correlated the weather is at those two grid points during that year.

I’ll say more later about how they compute these link strengths. In Part 3 we explained one method for doing it. This paper uses a similar but subtly different method.

The paper’s first big claim is that W_{l r}^y doesn’t change much from year to year, “in complete contrast” to the pattern of local daily air temperature and pressure fluctuations. In simple terms: the strength of the correlation between weather at two different points tends to be quite stable.

Moreover, the definition of link strength involves an adjustable time delay, \tau. We can measure the correlation between the weather at point l at any given time and point r at a time \tau days later. The link strength is computed by taking a maximum over time delays \tau. Naively speaking, the value of \tau that gives the maximum correlation is “how long it typically takes for weather at point l to affect weather at point r”. Or the other way around, if \tau is negative.

This is a naive way of explaining the idea, because I’m mixing up correlation with causation. But you get the idea, I hope.

Their second big claim is that when the link strength between two points l and r is big, the value of \tau that gives the maximum correlation doesn’t change much from year to year. In simple terms: if the weather at two locations is strongly correlated, the amount of time it takes for weather at one point to reach the other point doesn’t change very much.

The data

How do Berezin et al define their climate network?

They use data obtained from here:

NCEP-DOE Reanalysis 2.

This is not exactly the same data set that Ludescher et al use, namely:

NCEP/NCAR Reanalysis 1.

“Reanalysis 2” is a newer attempt to reanalyze and fix up the same pile of data. That’s a very interesting issue, but never mind that now!

Berezin et al use data for:

• the geopotential height for six different pressures

and

• the air temperature at those different heights

The geopotential height for some pressure says roughly how high you have to go for air to have that pressure. Click the link if you want a more precise definition! Here’s the geopotential height field for the pressure of 500 millibars on some particular day of some particular year:

The height is in meters.

Berezin et al use daily values for this data for:

• locations world-wide on a grid with a resolution of 5° × 5°,

during:

• the years from 1948 to 2006.

They divide the globe into 9 zones, and separately study each zone:

So, they’ve got twelve different functions of space and time, where space is a rectangle discretized using a 5° × 5° grid, and time is discretized in days. From each such function they build a ‘climate network’.

How do they do it?

The climate networks

Berezin et al’s method of defining a climate network is similar to Ludescher et al’s, but different. Compare Part 3 if you want to think about this.

Let \tilde{S}^y_l(t) be any one of their functions, evaluated at the grid point l on day t of year y.

Let S_l^y(t) be \tilde{S}^y_l(t) minus its climatological average. For example, if t is June 1st and y is 1970, we average the temperature at location l over all June 1sts from 1948 to 2006, and subtract that from \tilde{S}^y_l(t) to get S^y_l(t). In other words:

\displaystyle{  S^y_l(t) = \tilde{S}^y_l(t) - \frac{1}{N} \sum_y \tilde{S}^y_l(t)  }

where N is the number of years considered.

For any function of time f, let \langle f^y(t) \rangle be the average of the function over all days in year y. This is different from the ‘running average’ used by Ludescher et al, and I can’t even be 100% sure that Berezin et al mean what I just said: they use the notation \langle f^y(t) \rangle.

Let l and r be two grid points, and \tau any number of days in the interval [-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}]. Define the cross-covariance function at time t by:

\Big(f_l(t) - \langle f_l(t) \rangle\Big) \; \Big( f_r(t + \tau) - \langle f_r(t + \tau) \rangle \Big)

I believe Berezin et al mean to consider this quantity, because they mention two grid points l and r. Their notation omits the subscripts l and r, so it is impossible to be completely sure what they mean! But what I wrote is the reasonable quantity to consider here, so I’ll assume this is what they meant.

They normalize this quantity and take its absolute value, forming:

\displaystyle{ X_{l r}^y(\tau) = \frac{\Big|\Big(f_l(t) - \langle f_l(t) \rangle\Big) \; \Big( f_r(t + \tau) - \langle f_r(t + \tau) \rangle \Big)\Big|}   {\sqrt{\Big\langle \Big(f_l(t)      - \langle f_l(t)\rangle \Big)^2 \Big\rangle  }  \; \sqrt{\Big\langle \Big(f_r(t+\tau) - \langle f_r(t+\tau)\rangle\Big)^2 \Big\rangle  } }  }

They then take the maximum value of X_{l r}^y(\tau) over delays \tau \in [-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}], subtract its mean over delays in this range, and divide by the standard deviation. They write something like this:

\displaystyle{ W_{l r}^y = \frac{\mathrm{MAX}\Big( X_{l r}^y - \langle X_{l r}^y\rangle \Big) }{\mathrm{STD} X_{l r}^y} }

and say that the maximum, mean and standard deviation are taken over the (not written) variable \tau \in [-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}].

Each number W_{l r}^y is called a link strength. For each year, the matrix of numbers W_{l r}^y where l and r range over all grid points in our zone is called a climate network.
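Here is a rough Python sketch of this recipe as I read it (my own reconstruction, not code from Berezin et al), for a single pair of grid points and a single year. I put a time average over the days of the year into the numerator, since the denominators clearly involve such averages; the delay range and the test series below are made up.

import numpy as np

def cross_correlation(f_l, f_r, tau):
    """|normalized cross-covariance| between f_l(t) and f_r(t + tau), averaged over t."""
    if tau >= 0:
        a, b = f_l[:len(f_l) - tau], f_r[tau:]
    else:
        a, b = f_l[-tau:], f_r[:len(f_r) + tau]
    a = a - a.mean()
    b = b - b.mean()
    return abs(np.mean(a * b)) / (a.std() * b.std())

def link_strength(f_l, f_r, tau_max):
    """W = (max over tau - mean over tau) / (std over tau) of the cross-correlation."""
    taus = range(-tau_max, tau_max + 1)
    X = np.array([cross_correlation(f_l, f_r, tau) for tau in taus])
    return (X.max() - X.mean()) / X.std()

# Made-up daily anomaly series: the weather at r lags the weather at l by about 5 days.
rng = np.random.default_rng(0)
f_l = rng.standard_normal(365)
f_r = np.roll(f_l, 5) + 0.5 * rng.standard_normal(365)
print("W_lr for this year:", link_strength(f_l, f_r, tau_max=30))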

We can think of a climate network as a weighted complete graph with the grid points l as nodes. Remember, an undirected graph is one without arrows on the edges. A complete graph is an undirected graph with one edge between any pair of nodes:

A weighted graph is an undirected graph where each edge is labelled by a number called its weight. But right now we’re also calling the weight the ‘link strength’.

A lot of what’s usually called ‘network theory’ is the study of weighted graphs. You can learn about it here:

• Ernesto Estrada, The Structure of Complex Networks: Theory and Applications, Oxford U. Press, Oxford, 2011.

Suffice it to say that given a weighted graph, there are a lot of quantities you can compute from it, which are believed to tell us interesting things!
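As a toy illustration (mine, not from the paper), here is how one such quantity, the weighted degree or 'node strength' of each grid point, can be read off a small made-up climate network using the networkx library.

import itertools
import networkx as nx

grid_points = ["l1", "l2", "l3", "l4"]
made_up_W = {("l1", "l2"): 2.1, ("l1", "l3"): 0.7, ("l1", "l4"): 1.3,
             ("l2", "l3"): 3.0, ("l2", "l4"): 0.4, ("l3", "l4"): 1.8}

G = nx.Graph()
for l, r in itertools.combinations(grid_points, 2):
    G.add_edge(l, r, weight=made_up_W[(l, r)])   # a weighted complete graph

# "Node strength" = sum of the weights of the edges at each node.
for node, strength in G.degree(weight="weight"):
    print(node, strength)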

The conclusions

I will not delve into the real meat of the paper, namely what they actually do with their climate networks! The paper is free online, so you can read this yourself.

I will just quote their conclusions and show you a couple of graphs.

The conclusions touch on an issue that’s important for the network-based approach to El Niño prediction. If climate networks are ‘stable’, not changing much in time, why would we use them to predict a time-dependent phenomenon like the El Niño Southern Oscillation?

We have established the stability of the network of connections between the dynamics of climate variables (e.g. temperatures and geopotential heights) in different geographical regions. This stability stands in fierce contrast to the observed instability of the original climatological field pattern. Thus the coupling between different regions is, to a large extent, constant and predictable. The links in the climate network seem to encapsulate information that is missed in analysis of the original field.

The strength of the physical connection, W_{l r}, that each link in this network represents, changes only between 5% to 30% over time. A clear boundary between links that represent real physical dependence and links that emerge due to noise is shown to exist. The distinction is based on both the high link average strength \overline{W_{l r}} and on the low variability of time delays \mathrm{STD}(T_{l r}).

Recent studies indicate that the strength of the links in the climate network changes during the El Niño Southern Oscillation and the North Atlantic Oscillation cycles. These changes are within the standard deviation of the strength of the links found here. Indeed in Fig. 3 it is clearly seen that the coefficient of variation of links in the El Niño basin (zone 9) is larger than other regions such as zone 1. Note that even in the El Niño basin the coefficient of variation is relatively small (less than 30%).

Beside the stability of single links, also the hierarchy of the link strengths in the climate network is preserved to a large extent. We have shown that this hierarchy is partially due to the two dimensional space in which the network is embedded, and partially due to pure physical coupling processes. Moreover the contribution of each of these effects, and the level of noise was explicitly estimated. The spatial effect is typically around 50% of the observed stability, and the noise reduces the stability value by typically 5%–10%.

The network structure was further shown to be consistent across different altitudes, and a monotonic relation between the altitude distance and the correspondence between the network structures is shown to exist. This yields another indication that the observed network structure represents effects of physical coupling.

The stability of the network and the contributions of different effects were summarized in specific relation to different geographical areas, and a clear distinction between equatorial and off–equatorial areas was observed. Generally, the network structure of equatorial regions is less stable and more fluctuative.

The stability and consistence of the network structure during time and across different altitudes stands in contrast to the known unstable variability of the daily anomalies of climate variables. This contrast indicates an analogy between the behavior of nodes in the climate network and the behavior of coupled chaotic oscillators. While the fluctuations of each coupled oscillators are highly erratic and unpredictable, the interactions between the oscillators is stable and can be predicted. The possible outreach of such an analogy lies in the search for known behavior patterns of coupled chaotic oscillators in the climate system. For example, existence of phase slips in coupled chaotic oscillators is one of the fingerprints for their cooperated behavior, which is evident in each of the individual oscillators. Some abrupt changes in climate variables, for example, might be related to phase slips, and can be understood better in this context.

On the basis of our measured coefficient of variation of single links (around 15%), and the significant overall network stability of 20–40%, one may speculatively assess the extent of climate change. However, for this assessment our current available data is too short and does not include enough time from periods before the temperature trends. An assessment of the relation between the network stability and climate change might be possible mainly through launching of global climate model “experiments” realizing other climate conditions, which we indeed intend to perform.

A further future outreach of our work can be a mapping between network features (such as network motifs) and known physical processes. Such a mapping was previously shown to exist between an autonomous cluster in the climate network and El Niño. Further structures without such a climate interpretation might point towards physical coupling processes which were not observed earlier.

(I have expanded some acronyms and deleted some reference numbers.)

Finally, here are two nice graphs showing the average link strength as a function of distance. The first is based on four climate networks for Zone 1, the southern half of South America:

The second is based on four climate networks for Zone 9, a big patch of the Pacific north of the Equator which roughly corresponds to the ‘El Niño basin’:

As we expect, temperatures and geopotential heights get less correlated at points further away. But the rate at which the correlation drops off conveys interesting information! Graham Jones has made some interesting charts of this for the rectangle of the Pacific that Ludescher et al use for El Niño prediction, and I’ll show you those next time.

The series so far

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

El Niño project (part 8): Berezin et al on the stability of climate networks.


by John Baez at October 14, 2014 12:07 AM

October 13, 2014

John Baez - Azimuth

Network Theory (Part 31)

Last time we came up with a category of labelled graphs and described circuits as ‘cospans’ in this category.

Cospans may sound scary, but they’re not. A cospan is just a diagram consisting of an object with two morphisms going into it:

We can talk about cospans in any category. A cospan is an abstract way of thinking about a ‘chunk of stuff’ \Gamma with two ‘ends’ I and O. It could be any sort of stuff: a set, a graph, an electrical circuit, a network of any kind, or even a piece of matter (in some mathematical theory of matter).

We call the object \Gamma the apex of the cospan and call the morphisms i: I \to \Gamma, o : O \to \Gamma the legs of the cospan. We sometimes call the objects I and O the feet of the cospan. We call I the input and O the output. We say the cospan goes from I to O, though the direction is just a convention: we can flip a cospan and get a cospan going the other way!

If you’re wondering about the name ‘cospan’, it’s because a span is a diagram like this:

Since a ‘span’ is another name for a bridge, and this looks like a bridge from I to O, category theorists called it a span! And category theorists use the prefix ‘co-‘ when they turn all the arrows around. Spans came first historically, and we will use those too at times. But now let’s think about how to compose cospans.

Composing cospans is supposed to be like gluing together chunks of stuff by attaching the output of the first to the input of the second. So, we say two cospans are composable if the output of the first equals the input of the second, like this:

We then compose them by forming a new cospan going all the way from X to Z:

The new object \Gamma +_Y \Gamma' and the new morphisms i'', o'' are built using a process called a ‘pushout’ which I’ll explain in a minute. The result is a cospan from X to Z, called the composite of the cospans we started with. Here it is:

So how does a pushout work? It’s a general construction that you can define in any category, though it only exists if the category is somewhat nice. (Ours always will be.) You start with a diagram like this:

and you want to get a commuting diamond like this:

which is in some sense ‘the best’ given the diagram we started with. For example, suppose we’re in the category of sets and Y is a set included in both \Gamma and \Gamma'. Then we’d like A to be the union of \Gamma and \Gamma'. There are other choices of A that would give a commuting diamond, but the union is the best. Something similar is happening when we compose circuits, but instead of the category of sets we’re using the category of labelled graphs we discussed last time.

How do we make precise the idea that A is ‘the best’? We consider any other potential solution to this problem, that is, some other commuting diamond:

Then A is ‘the best’ if there exists a unique morphism q from A to the ‘competitor’ Q making the whole combined diagram commute:

This property is called a universal property: instead of saying that A is the ‘best’, grownups say it is universal.

When A has this universal property we call it the pushout of the original diagram, and we may write it as \Gamma +_Y \Gamma'. Actually we should call the whole diagram

the pushout, or a pushout square, because the morphisms i'', o'' matter too. The universal property is not really a property just of A, but of the whole pushout square. But often we’ll be sloppy and call just the object A the pushout.

Puzzle 1. Suppose we have a diagram in the category of sets

where Y = \Gamma \cap \Gamma' and the maps i, o' are the inclusions of this intersection in the sets \Gamma and \Gamma'. Prove that A = \Gamma \cup \Gamma' is the pushout, or more precisely the diagram

is a pushout square, where i'', o'' are the inclusions of \Gamma and \Gamma' in the union A = \Gamma \cup \Gamma'.
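
Since the post’s diagrams are images that don’t reproduce here, the following is my own LaTeX sketch (using the tikz-cd package) of the pushout square in Puzzle 1, with the same labels as in the puzzle:

```latex
% A sketch of the pushout square from Puzzle 1 (labels as in the puzzle text).
\documentclass{standalone}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}
Y = \Gamma \cap \Gamma' \arrow[r, "i"] \arrow[d, swap, "o'"] & \Gamma \arrow[d, "i''"] \\
\Gamma' \arrow[r, swap, "o''"] & A = \Gamma \cup \Gamma'
\end{tikzcd}
\end{document}
```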

More generally, a pushout in the category of sets is a way of gluing together sets \Gamma and \Gamma' with some ‘overlap’ given by the maps

And this works for labelled graphs, too!
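
To see the gluing in miniature, here is a small Python sketch (my own illustration, not from the post) of the pushout of finite sets: take the disjoint union of \Gamma and \Gamma' and identify i(y) with o'(y) for each y in Y, using a union–find structure:

```python
def pushout(gamma, gamma_prime, Y, i, o_prime):
    """Pushout of finite sets along maps i: Y -> gamma, o_prime: Y -> gamma_prime.

    Returns the glued elements (as frozensets of tagged originals) together
    with the two legs sending each original element to its class."""
    # Work in the disjoint union, tagging elements by which set they came from.
    elements = [(0, x) for x in gamma] + [(1, x) for x in gamma_prime]
    parent = {e: e for e in elements}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]   # path compression
            e = parent[e]
        return e

    def union(a, b):
        parent[find(a)] = find(b)

    # Identify i(y) ~ o'(y) for every y in Y.
    for y in Y:
        union((0, i[y]), (1, o_prime[y]))

    classes = {}
    for e in elements:
        classes.setdefault(find(e), set()).add(e)
    apex = [frozenset(c) for c in classes.values()]
    leg1 = {x: next(c for c in apex if (0, x) in c) for x in gamma}
    leg2 = {x: next(c for c in apex if (1, x) in c) for x in gamma_prime}
    return apex, leg1, leg2

# Example: gamma and gamma' overlap on Y = {'b'}; the pushout behaves
# just like their union {'a', 'b', 'c'}.
apex, leg1, leg2 = pushout({'a', 'b'}, {'b', 'c'}, {'b'},
                           i={'b': 'b'}, o_prime={'b': 'b'})
print(len(apex))   # 3
```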

Puzzle 2. Suppose we have two circuits of resistors that are composable, like this:

and this:

These give cospans in the category L\mathrm{Graph} where

L = (0,\infty)

(Remember from last time that L\mathrm{Graph} is the category of graphs with edges labelled by elements of some set L.) Show that if we compose these cospans we get a cospan corresponding to this circuit:

If you’re a mathematician you might find it easier to solve this kind of problem in general, which requires pondering how pushouts work in L\mathrm{Graph}. Alternatively, you might find it easier to think about this particular example: then you can just check that the answer we want has the desired property of a pushout!

If this stuff seems complicated, well, just know that category theory is a very general, powerful tool and I’m teaching you just the microscopic fragment of it that we need right now. Category theory ultimately seems very simple: I can’t really think of any math that’s simpler! It only seems complicated when it’s unfamiliar and you have a fragmentary view of it.

So where are we? We know that circuits made of resistors are a special case of cospans. We know how to compose cospans. So, we know how to compose circuits… and in the last puzzle, we saw this does just what we want.

The advantage of this rather highbrow approach is that a huge amount is known about composing cospans! In particular, suppose we have any category C where pushouts exist: that is, where we can always complete any diagram like this:

to a pushout square. Then we can form a category \mathrm{Cospan}(C) where:

• an object is an object of C

• a morphism from an object I \in C to an object O \in C is an equivalence class of cospans from I to O:

• we compose cospans in the manner just described.

Why did I say ‘equivalence class’? It’s because the pushout is not usually unique. It’s unique only up to isomorphism. So, composing cospans would be ill-defined unless we work with some kind of equivalence class of cospans.

To be precise, suppose we have two cospans from I to O:

Then a map of cospans from one to the other is a commuting diagram like this:

We say that this is an isomorphism of cospans if f is an isomorphism.

This gives our equivalence relation on cospans! It’s an old famous theorem in category theory—so famous that it’s hard to find a reference for the proof—that whenever C is a category with pushouts, there’s a category \mathrm{Cospan}(C) where:

• an object is an object of C

• a morphism from an object I \in C to an object O \in C is an isomorphism class of cospans from I to O.

• we compose isomorphism classes of cospans by picking representatives, composing them and then taking the isomorphism class.

This takes some work to prove, but it’s true, so this is how we get our category of circuits!

Next time we’ll do something with this category. Namely, we’ll cook up a category of ‘behaviors’. The behavior of a circuit made of resistors just says which currents and potentials its terminals can have. If we put a circuit in a metaphorical ‘black box’ and refuse to peek inside, all we can see is its behavior.

Then we’ll cook up a functor from the category of circuits to the category of behaviors. We’ll call this the ‘black box functor’. Saying that it’s a functor mainly means that

\blacksquare(f g) = \blacksquare(f) \blacksquare(g)

Here f and g are circuits that we can compose, and f g is their composite. The black square is the black box functor, so \blacksquare(fg) is the behavior of the circuit f g. There’s a way to compose behaviors, too, and the equation above says that the behavior of the composite circuit is the composite of their behaviors!

This is very important, because it says we can figure out what a big circuit does if we know what its pieces do. And this is one of the grand themes of network theory: understanding big complicated networks by understanding their pieces. We may not always be able to do this, in practice! But it’s something we’re always concerned with.


by John Baez at October 13, 2014 08:09 PM

ZapperZ - Physics and Physicists

A Co-Author That Never Existed?
I don't know what to make of this. On one hand, these are adults and, presumably, responsible physicists. Yet, on the other, this is the type of practical joke pulled by a juvenile.

Someone found a paper with a coauthor by the name of "Stronzo Bestiale", which in Italian supposedly means "Total Asshole". The author doesn't exist; the real coauthors gave him/her/it an affiliation at the Institute of Experimental Physics, University of Vienna. Of course, there's no one there by that name. The paper with all 3 authors, including this non-existent person, was published in the Journal of Statistical Physics back in 1987 (it took that long to discover this?).

One of the coauthors was contacted, and this is the story that was given:

"At that time," he says, "we were very active in the development of a new computational technique, non-equilibrium molecular dynamics, connecting fractal geometry, irreversibility and the second law of thermodynamics. The idea was born during meetings at CECAM (Centre Européen de Calcul Atomique et Moléculaire) in Lausanne, Switzerland, and the Enrico Fermi summer school organized at Lake Como with Giovanni Ciccotti, professor of condensed matter physics at the Sapienza University in Rome. In these meetings, the theoretical picture of this technique was clear to me, so I wrote several papers on the subject along with some colleagues. But the reviewers of Physical Review Letters and the Journal of Statistical Physics refused to publish my texts: they contained ideas that were too innovative."

"Meanwhile", Hoover continues, "while I was traveling on a flight to Paris, next to me were two Italian women who spoke among themselves, saying continually: "Che stronzo (what an asshole)!", "Stronzo bestiale (total asshole)". Those phrases had stuck in my mind. So, during a CECAM meeting, I asked Ciccotti what they meant. When he explained it to me, I thought that Stronzo Bestiale would have been the perfect co-author for a refused publication. So I decided to submit my papers again, simply by changing the title and adding the name of that author. And the research was published.

Let's start with the misleading title of this article. To claim that this non-existent author has "... published research in some of the world's most esteemed physics journals ..." is a stretch by any measure. I did a Google Scholar search on that name, and none appeared linking this person to any paper published in Nature, Science, PRL, Phys. Rev. journals, etc. And these are "some of the world's most esteemed physics journals" in anyone's book!

Secondly, I don't quite get the point of all of this. The refereeing process is focused on the content of the work, not on who or what submitted it. We certainly don't want a referee to have any bias for or against an author, and so referees should not pay attention to who wrote the manuscript. In fact, there is a movement to make authors anonymous to the referees, the same way the referees are anonymous to the authors. So inserting such a name into the author list has no bearing, and should have no bearing, on the evaluation of the work.

After this, I wouldn't be surprised if journals start to vet the credentials of the authors submitting anything to them.

Zz.

by ZapperZ (noreply@blogger.com) at October 13, 2014 07:59 PM

Lubos Motl - string vacua and pheno

ATLAS: two Standard-Model-only Higgs decay papers
Some hours ago, the ATLAS collaboration posted two papers on its website:
Evidence for Higgs boson Yukawa couplings in the \(H \to \tau\tau\) decay mode with the ATLAS detector

Observation and measurement of Higgs boson decays to \(WW^*\) with ATLAS at the LHC
Spoiler alert. Too late. The result is that all the basic figures are found to be in almost exact agreement with the Standard Model.




The first paper involving the \(\tau\)-leptons may be interesting because it's the same decay channel for which ATLAS organized the Kaggle contest. Recall that your humble correspondent dropped from 1st place to 8th place when the "private dataset" was used at the end.



Also CERN-related: a new CERN 5-minute video about the rise of the Standard Model.

The winner, Gábor Melis, was the only one who used neural networks to solve the challenge. Everyone else at the top, including myself, used algorithms based on boosted decision trees (e.g. xgboost), and this paradigm was exploited in the newly revealed paper, too. However, there are almost no details, so we can't know whether the authors of the paper would have had a chance to compete with us in the contest. In the end, it doesn't matter. The significance level itself has some statistical fluctuation in it, plus or minus one sigma, so whether a Hungarian guy had a score better than mine by 0.04 really doesn't matter in physics.
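
For readers curious what "boosted decision trees" means in practice, here is a minimal, purely illustrative sketch (synthetic data, not the Kaggle Higgs set) of training an xgboost classifier and scoring it with an approximate-median-significance (AMS) figure of merit of the kind used in the challenge; note that in the real contest s and b were sums of event weights rather than raw counts:

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(42)

# Toy stand-in for the tau-tau data: signal and background drawn from
# two overlapping Gaussians in a handful of features.
n, n_features = 20000, 10
X_bkg = rng.normal(0.0, 1.0, size=(n, n_features))
X_sig = rng.normal(0.5, 1.0, size=(n // 10, n_features))
X = np.vstack([X_bkg, X_sig])
y = np.concatenate([np.zeros(len(X_bkg)), np.ones(len(X_sig))])

# Shuffle and split into train / validation halves.
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
half = len(y) // 2
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X[:half], y[:half])

def ams(s, b, b_reg=10.0):
    """Approximate median significance, with a small regularization term."""
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log(1.0 + s / (b + b_reg)) - s))

# Count selected signal and background events above a score threshold.
scores = clf.predict_proba(X[half:])[:, 1]
sel = scores > 0.5
s = np.sum(y[half:][sel] == 1)
b = np.sum(y[half:][sel] == 0)
print(f"selected: s = {s}, b = {b}, AMS = {ams(s, b):.2f}")
```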




The second paper is about the \(WW^*\) decays of the Higgs boson. It's a pair of \(W\)-bosons but, as the asterisk indicates, one of them is virtual. Again, ATLAS finds the cross section to agree with the Standard Model extremely well. I feel that it could be bad news for the hint of new physics in the \(WW\)-decays of the Higgs.

Previously, ATLAS saw an enhanced cross section if both \(W\)-bosons are on-shell. If one of them is virtual, the excess seems to go away. It may look a bit strange and may be an indication that the excess in the on-shell \(WW\)-decays was a fluke. But maybe there is a reason not to expect an excess in the \(WW^*\) analysis...

by Luboš Motl (noreply@blogger.com) at October 13, 2014 06:45 PM

astrobites - astro-ph reader's digest

Planet Formation on a Budget

We’ve discovered over a thousand exoplanets and characterized many of their properties. We’ve also discovered and studied the birthplace of planets: protoplanetary disks of gas and dust surrounding young stars. While many of the details of the planet formation process remain to be figured out, we can check if there is enough material in protoplanetary disks to form the planets that we’ve discovered. The authors of today’s post do just that, and come to some interesting conclusions about how quickly planets start to form.

Figure 1. An artist's conception of a protoplanetary disk. Image from NASA JPL.

While protoplanetary disks consist of both gas and dust, the authors only consider the budget of dust (solid material) when making their comparison. The amount of gas is more difficult to measure, and much of it is known to be lost as the disks dissipate. The authors take a mass census of the protoplanetary disks in the Taurus-Auriga complex, a giant molecular cloud and a cluster of stars that are just a few million years old. The masses of solids in protoplanetary disks are typically measured from radio or sub-mm wavelength light that is emitted from the dust in these systems. The authors focus on class II disks, which are generally regarded as the birthplace of planets. Class I disks (which precede class II) are short lived and are surrounded by envelopes of material that are still feeding the disks and protostars. Class III disks (the final stage in protoplanetary disk evolution) have already lost much of their material.

To determine the mass budget in mature planetary systems, the authors account for solid material in the form of  Earth-like and super-Earth planets, the cores of gas giant planets, and debris disks (which consist of material ranging in size from dust to km-scale planetesimals). The frequency of different types of planets is derived from the results of the Kepler mission and various radial-velocity, microlensing, and direct imaging surveys. These planet detection methods are sensitive to different orbital regions, so their results are complementary when attempting to sum the total mass of planetary systems. The frequency of debris disks is derived from surveys with the Spitzer and Herschel space telescopes, which detect the infrared light emitted from the dust grains in debris disks.

The authors use the observational results to generate a simulated population of planetary systems using a Monte Carlo technique. The distribution of the total solid mass in these simulated planetary systems is shown in Figure 2, along with the distribution of solid mass in the observed protoplanetary disks. The figure is in the form of a cumulative distribution, where the x axis notes the mass of solids and the y axis notes the fraction of systems in the population having this mass or greater.
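
As a very rough illustration of this kind of Monte Carlo census (my own toy version, with made-up occurrence rates and mass ranges rather than the authors' survey-derived inputs), one can draw a population of planetary systems and build the cumulative distribution of their total solid mass:

```python
import numpy as np

rng = np.random.default_rng(1)
n_systems = 100000

# Toy occurrence rates and solid-mass ranges (Earth masses) -- placeholders,
# not the values derived from Kepler, RV, microlensing or debris-disk surveys.
planet_types = {
    "super_earth": (0.5, (1.0, 10.0)),     # (occurrence per star, mass range)
    "giant_core":  (0.1, (10.0, 30.0)),    # solid core of a gas giant
    "debris_disk": (0.2, (0.01, 1.0)),
}

total_solids = np.zeros(n_systems)
for rate, (m_lo, m_hi) in planet_types.values():
    n_drawn = rng.poisson(rate, size=n_systems)          # how many per system
    # Log-uniform masses for each drawn object, summed per system.
    for k in range(1, n_drawn.max() + 1):
        has_k = n_drawn >= k
        masses = np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), size=has_k.sum()))
        total_solids[has_k] += masses

# Cumulative (survival) distribution: fraction of systems with >= M of solids.
grid = np.logspace(-2, 2, 50)
frac_ge = [(total_solids >= m).mean() for m in grid]
for m, f in list(zip(grid, frac_ge))[::10]:
    print(f"M >= {m:7.2f} M_Earth : {f:.3f} of systems")
```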

So is there enough mass in protoplanetary disks to build the planets and debris disks that we’ve discovered? These authors conclude that there is not. You can see this in Figure 2, by comparing the green or cyan lines — which represent slightly different versions of their simulated population of mature planetary systems — with the black line — which represents the observed distribution of class II protoplanetary disks. At less than 20 Earth-masses the black line is below the green/cyan lines, meaning there are not enough disks in this mass range to account for the observed planets. Above 20 Earth-masses, however, there are enough disks to build these planetary systems.

The authors note that their conclusion is conservative, as they likely underestimated the solid content of planetary systems in their budget. First, they note that many planets exist that cannot be detected and counted by current planet detection techniques. Planets smaller than the Earth are extremely difficult to detect by any technique, and direct imaging — the only method sensitive to planets at very large orbits — can currently only detect planets with masses of at least several times Jupiter’s mass. Second, the process of planet formation is likely inefficient, with much of the leftover mass of small solid particles being lost from the system as the protoplanetary disk disperses. Gravitational interactions among fully formed planets likely send some planets crashing into the star while ejecting others from the system.

The authors do find a solution to the problem of insufficient mass by examining Class I protoplanetary disks. The solid mass distribution of these disks is shown in violet in Figure 2, and they clearly provide sufficient mass to build the planetary systems of all sizes. This means that the planet formation process must start very quickly, with dust already beginning to coalesce into planetesimals during the brief class I phase. Once solid mass is locked up in larger bodies like planetesimals, it is hidden from observations of protoplanetary disks, which explains why the class II disks appear to have much lower masses (the decrease of solid mass from class I to class II disks could have alternatively been explained by material falling onto the star or being ejected from the system by jets). The mechanism of planetesimal formation is still debated by theorists, but these results lend strong support to those theories that can build planetesimals quickly.

Figure 2. The cumulative distributions of the mass of solids in protoplanetary disks and mature planetary systems. The x axis is the mass of solids, and the y axis is the fraction of systems in the population with this mass or greater. Green and cyan: results from two versions of the simulated distribution of mature planetary systems. Magenta: the same simulations but with the microlensing planet results excluded. Black: the observed distribution of class II protoplanetary disks. Violet: the observed distribution of class I protoplanetary disks.

by Nick Ballering at October 13, 2014 04:29 PM

The Great Beyond - Nature blog

Tragedy strikes Taiwanese research ship
The sinking of the Ocean Research V in an image from a video released by Taiwan’s Coast Guard.

Taiwan Coast Guard/AFP via Getty

Two scientists died on 11 October after the research vessel they were on, Taiwan’s Ocean Research V, capsized in the Taiwan Strait. Another 25 scientists and 18 crew members were rescued. 

The 73-metre, 2,700-tonne vessel, which had been operating only since February 2013, cost 1.5 billion new Taiwan dollars (US$50 million). It had three laboratories, sonar for seafloor mapping, multiple plankton samplers and other devices for comprehensive ocean exploration. It was built to carry out scientific as well as resource surveys, including sampling sea-bed gas hydrates and surveying offshore wind-turbine sites.

The Ocean Research V was also equipped with a dynamic positioning system to enable it "to conduct highly precise action on sea even under strong winds in the situation of typhoon or strong monsoon", according to the Taiwan Ocean Research Institute, which operated it. But on the night of 10 October, one day after setting sail, the ship capsized near Penghu island, some 50 kilometres off Taiwan's western coast. Some speculate that it hit a reef after being blown off course by strong winds related to a typhoon.

Hsu Shih-chieh, a researcher at the Academia Sinica in Taipei, reportedly died after making efforts to save his fellow researchers. Lin Yi-chun, a scientist at the Taiwan Ocean Research Institute, also died.

The Ministry of Science and Technology is now investigating the cause of the accident.

by David Cyranoski at October 13, 2014 01:42 PM

Symmetrybreaking - Fermilab/SLAC

Q&A: Katherine Freese

The new director of the Nordic Institute for Theoretical Physics talks neutrinos, women in science, and the hunt for dark matter.

Katherine Freese admits she didn’t do well in her first college physics course, but her impressive resume tells the rest of the story.

At MIT, she helped develop the theory of natural inflation, a model of early universe expansion that has survived nearly 25 years of experimental data. In 1991, she became the first female professor hired to the physics department at the University of Michigan.

Now the director of the international theoretical physics institute NORDITA in Stockholm, Sweden, and the author of Cosmic Cocktail: Three Parts Dark Matter, Freese spoke to symmetry about her career in particle physics and the search for unknown particles.

 

S: You weren’t studying cosmology at the start of the 1980s. What were you up to?

KF: I was a Columbia University graduate student working on a neutrino experiment at Fermilab [near Chicago]. We were trying to measure the neutrino mass.

 

S: What was happening in cosmology at that time?

KF: Alan Guth had thought of inflation. This is the idea that very early in the universe—around 10⁻³⁵ seconds after the big bang—you had this very rapid exponential growth of the universe that smooths things out. It explains why, on the very largest scales, the universe looks homogeneous. The standard model of the big bang is great, but it has flaws, and this inflation corrected one of those flaws—it filled in the missing piece.

 

S: Why did you switch fields?

KF: I wanted to have some reason to go into the city of Chicago, and I discovered that there was a cosmology class. I went there, and I listened to David Schramm talk about this new field. I thought it was really, really great; he was a very inspiring man.

He got interested in me after I was the only one to ace his midterm and he suggested, “Would you like to work with me on a theory project about neutrinos?” I came back to him and said, “How about I switch to work with you?” David Schramm was the founder of the particle astrophysics theory group at Fermilab.

 

S: Was it challenging to be a woman in physics then?

KF: I became very aware of being a woman in physics once I had a baby. There was no maternity leave, and it was my first baby and my first teaching appointment all at the same time. It was a horrible experience. 

 

S: What about now?

KF: Michigan gives a semester leave to every junior faculty member now, so that even men are able to take paternity leave. That’s a big improvement. Yet, the overall culture can still be negative. You very rarely get positive reinforcement—somebody saying, “Oh, that’s a good piece of work!” It’s very uncomfortable. I think a lot of men see this as kind of a challenge, whereas women just think, “Oh my God, I must be stupid. I should just leave this field.”

 

S: You’ve said that Sweden is especially accepting of women in science. How is it different?

KF: I can only say that everywhere else I went, I was always aware of being a woman. But when I was on the board in Stockholm at the Oscar Klein Center for Cosmoparticle Physics, we were just talking about subject matter, and nobody was putting the gender factor behind any of this conversation.

 

S: Now one of your topics of interest is dark matter. How has the search for dark matter changed for you over the years?

KF: Dark matter, we think, is some new kind of fundamental particle. And the best candidates are called WIMPs. It stands for Weakly Interacting Massive Particles. That’s what we’re looking for.

In the mid-1980s, I was making theoretical calculations of scattering of WIMPs on ordinary matter and predictions for what people would see if you built [dark matter] detectors. Based on these predictions, people started to build them. Now there are experiments all over the world.

 

S: What if those experiments are inconclusive?

KF: It’s a three-pronged approach. [The first prong is the direct detection experiments described above.]

The second prong is at the LHC at CERN, looking at the collisions of two protons and producing a decay chain of particles that includes dark matter. Nothing yet from the LHC, but they’re going to turn on again in 2015 at double the energy, so we’ll see what happens.

The third prong is where you have two WIMPs, they hit each other, they annihilate and they turn into something detectable. The annihilation products of particular interest these days are high-energy photons called gamma rays. The Fermi satellite, which surveys the gamma-ray sky, sees an excess toward the center of the galaxy. So that’s an exciting place to look.

Everybody thinks we have a good chance of detection in the next decade.

 

S: How will it change the way we view our universe to figure this out?

KF: In building these detectors you’re developing new technology, and that can have interesting offshoots. I mean, the biggest data set on Earth is what comes out of CERN, so it really drives computer science.

This question of what the universe is made of is an age-old quest. And if you discover something major like this, it’s a big deal. People ask, “How do things like this impact our daily lives?” And until you find it, you don’t know.

 

Like what you see? Sign up for a free subscription to symmetry!

by Troy Rummler at October 13, 2014 01:00 PM

October 12, 2014

Michael Schmitt - Collider Blog

Neural Networks for Triggering

As experiments push the high-energy frontier, and also the intensity frontier, they must contend with higher and higher instantaneous luminosities. This challenge drives experimenters to try new techniques for triggering that might have sounded outlandish or fanciful ten years ago.

The Belle II experiment posted a paper this week on using (artificial) neural networks at the first trigger level for their experiment (arXiv:1410.1395). To be explicit: they plan to implement an artificial neural network at the hardware-trigger level, L1, i.e., the one that deals with the most primitive information from the detector in real time. The L1 latency is 5 μs which allows only 1 μs for the trigger decision.

At issue is a major background coming from Touschek scattering: Coulomb scattering of particles within the e- and e+ beams can transform a small transverse phase space into a long longitudinal phase space. (See DESY report 98-179 for a discussion.) The beam is thereby spread out in the z direction, leading to collisions taking place far from the center of the apparatus. This is a bad thing for analysis and for triggering, since much of the event remains unreconstructed — such events are a waste of bandwidth. Artificial neural networks, once trained, are mechanistic and parallel in the way they do their calculations, and therefore they are fast – just what is needed for this situation. The interesting point is that here, in the Belle application, decisions about the z position of the vertex will be made without reconstructing any tracks (because there is insufficient time to carry out the reconstruction).

The CDC (the central drift chamber) has 56 axial and stereo layers grouped into nine superlayers. Track segments are found by the track segment finders (TSF) based on superlayer information. The 2D trigger module finds crude tracks in the (r,φ) plane. The proposed neural network trigger takes information from the axial and stereo TSF, and also from the 2D trigger module.

Diagram of the Belle trigger.

As usual, the artificial neural network is based on the multi-layer perceptron (MLP) with a hyperbolic tangent activation function. The network is trained by back-propagation. Interestingly, the authors use an ensemble of “expert” MLPs corresponding to small sectors in phase space. Each MLP is trained on a subset of tracks corresponding to that sector. Several incarnations of the network were investigated, which differ in the precise information used as input to the neural network. The drift times are scaled and the left/right/undecided information is represented by an integer. The azimuthal angle can be represented by a scaled wire ID or by an angle relative to the one produced by the 2D trigger. There is a linear relation between the arc length and the z coordinate, so the arc length (μ) can also be a useful input variable.
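
To make the setup concrete, here is a minimal sketch (entirely synthetic inputs and an arbitrary network size, not Belle II data or the collaboration's network) of a tanh multi-layer perceptron regressing a vertex z position from a handful of scaled track-segment-like features, using scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Fake "TSF-like" inputs: scaled drift times, a left/right flag and a
# relative azimuthal angle per superlayer -- stand-ins only.
n_events, n_features = 5000, 9
X = rng.uniform(-1.0, 1.0, size=(n_events, n_features))

# Pretend the true vertex z (in cm) is a smooth nonlinear function of the
# inputs plus noise; the real relation comes from the detector geometry.
z_true = 10.0 * np.tanh(X @ rng.normal(size=n_features)) + rng.normal(0, 1.0, n_events)

mlp = MLPRegressor(hidden_layer_sizes=(64,), activation='tanh',
                   solver='adam', max_iter=2000, random_state=0)
mlp.fit(X[:4000], z_true[:4000])

resid = mlp.predict(X[4000:]) - z_true[4000:]
print(f"z resolution (std of residuals): {resid.std():.2f} cm")
```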

As a first test, one sector is trained for a sample of low-pT and another sample of high-pT tracks. The parameter range is very constrained, and the artificial neural networks do well, achieving a resolution of 1.1 – 1.8 cm.

In a second test, closer to the planned implementation, the output of the 2D trigger is represented by some smeared φ and pT values. The track parameters cover a wider range than in the first test, and the pT range is divided into nine pieces. The precision is 3–7 cm in z, which is not yet good enough for the application (they are aiming for 2 cm or better). Nonetheless, this estimate is useful because it can be used to restrict the sector size for the next step.

Resolution on z versus curvature, for three versions of the neural network.

Clearly this is a work in progress, and much remains to be done. Assuming that the Belle Collaboration succeeds, the fully pipelined neural network trigger will be realized on FPGA boards.


by Michael Schmitt at October 12, 2014 08:16 PM

Clifford V. Johnson - Asymptotia

Big Draw LA
The Big Draw LA event downtown today (in Grand Park) was a lot of fun! There were all sorts of stations of activity, and lots of people were joining in with drawing in various media, including making masks (so drawings and cut-outs) and making drawings on the concrete plaza area using strips of tape, which I thought was rather clever. (I forgot to photograph any of those, but look on twitter - and I presume instagram - under #thebigdrawla for things people have been posting.) One of the most interesting things was the construction made of drawings that people did on pieces of slate that lock together to make a larger structure. Have a look in the pictures below (click thumbnails for larger views). There were several, but maybe still not enough adults involved, in my opinion (at least when I went by). Perhaps this was due to a "I can't draw and it is too late for me, but there is hope for the children" line of reasoning? Bad reasoning - everyone can draw! Join in, all ages. There are events all around the city (see links below). The pursuit that had the highest proportion of adults was the costumed figure [...] Click to continue reading this post

by Clifford at October 12, 2014 01:43 AM

October 11, 2014

The n-Category Cafe

M-theory, Octonions and Tricategories

Quite a witches’ brew, eh?

Amazingly, they seem to be deeply related. John Huerta has just finished a paper connecting them… and this concludes a series of papers that makes me very happy, because it fulfills a long-held dream: to connect physics, division algebras, and higher categories.

Let me start with a very simple sketchy explanation. Experts should please forgive the inaccuracies in this first section: it’s hard to tell a story that’s completely accurate without getting bogged down in detail!

The rough idea

You’ve probably heard rumors that superstring theory lives in 10 dimensions and something more mysterious called M-theory lives in 11. You may have wondered why.

In fact, there’s a nice way to write down theories of superstrings in dimensions 3, 4, 6, and 10 — at least before you take quantum mechanics into account. Of these theories, it seems you can only consistently quantize the 10-dimensional version. But never mind that. What’s so great about the numbers 3, 4, 6 and 10?

What’s so great is that they’re 2 more than 1, 2, 4, and 8.

There are only normed division algebras in dimensions 1, 2, 4, and 8. The real numbers are 1-dimensional. The complex numbers are 2-dimensional. There are also more esoteric options: the quaternions are 4-dimensional, and the octonions are 8-dimensional. When you try to go beyond these, you lose the law that

|x y| = |x| |y|

and things aren’t so nice.

I’ve spent decades studying the quaternions and octonions, just because they’re weird and interesting. Why do the dimensions double each time in this game? There’s a nice answer. What happens if you go further, to dimension 16? I’ve learned a bit about that too, though I bet there are big mysteries still lurking here.
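
If you'd like to play with this numerically, here is a small sketch (my own, using one standard convention for the Cayley–Dickson doubling) that checks |xy| = |x||y| on random elements: the law holds in dimensions 1, 2, 4 and 8, and fails in dimension 16:

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugate: flip the sign of every imaginary component."""
    c = -x.copy()
    c[0] = x[0]
    return c

def cd_mult(x, y):
    """Product of two elements written as length-1, 2, 4, 8, 16, ... arrays,
    using the doubling formula (a,b)(c,d) = (ac - conj(d) b, d a + b conj(c))."""
    n = len(x)
    if n == 1:
        return x * y
    a, b = x[:n // 2], x[n // 2:]
    c, d = y[:n // 2], y[n // 2:]
    return np.concatenate([
        cd_mult(a, c) - cd_mult(conj(d), b),
        cd_mult(d, a) + cd_mult(b, conj(c)),
    ])

rng = np.random.default_rng(0)
for dim in (1, 2, 4, 8, 16):   # reals, complexes, quaternions, octonions, sedenions
    x, y = rng.normal(size=dim), rng.normal(size=dim)
    lhs = np.linalg.norm(cd_mult(x, y))
    rhs = np.linalg.norm(x) * np.linalg.norm(y)
    print(f"dim {dim:2d}: |xy| = |x||y| holds? {np.isclose(lhs, rhs)}")
```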

Most important, what — if anything — do normed division algebras have to do with physics? The jury is still out on this one, but there are some huge clues. Most fundamentally, a normed division algebra of dimension n gives a nice unified way to describe both spin-1 and spin-1/2 particles in (n+2)-dimensional spacetime! The gauge bosons in nature are spin-1 particles, while the fermions are spin-1/2 particles. We’d definitely like a good theory of physics to fit these together somehow.

One cool thing is this. A string is a curve, so it’s 1-dimensional, but as time passes it traces out a 2-dimensional surface. So, if we have a string floating around in some spacetime, we’ve got a 2d surface together with some extra dimensions of spacetime. It turns out to be very good to put complex coordinates on that 2d surface. Then you can describe how the string wiggles in the extra dimensions using equations that have symmetry under conformal transformations.

But for the string to be ‘super’ — for it to have supersymmetry, a symmetry between bosons and fermions — we need a certain special identity to hold, called the 3-ψ’s rule. And this holds precisely when we can take the extra dimensions and think of them as forming a normed division algebra!

So, we need 1, 2, 4 or 8 extra dimensions. So the total dimension of spacetime needs to be 3, 4, 6, or 10. Not at all coincidentally, these are also the dimensions where spin-1 and spin-1/2 particles can be described using a normed division algebra.

(This is a very rough sketch of a complicated argument, of course. I’m leaving out the details, but later I’ll show you where to find them.)

We can also look at theories of ‘branes’, which are like strings but higher-dimensional. Instead of a curve, a 2-brane is a 2-dimensional surface. As time passes, it traces out a 3-dimensional surface. So, if we have a 2-brane floating around in some spacetime, we’ve got a 3-dimensional surface together with some extra dimensions of spacetime. And it turns out that 2-branes can also have supersymmetry when the extra dimensions can be seen as a normed division algebra!

So now the total dimension of spacetime needs to be 3 more than 1, 2, 4, and 8. It needs to be 4, 5, 7 or 11.

When we take quantum mechanics into account it seems that the 11-dimensional theory works best… but the quantum aspects are still mysterious, murky and messy compared to superstring theory, so it’s called M-theory.

In his new paper, John Huerta has shown that using the octonions we can build a ‘super-3-group’, an algebraic structure that seems just right for understanding the symmetries of supersymmetric 2-branes in 11 dimensions.

I could say a lot more, but if you want more explanation without too much fancy math, try this:

This is a fun and easy article about this stuff, which we wrote for Scientific American.

The details

The detailed story has four parts.

  • John Baez and John Huerta, Division algebras and supersymmetry I, in Superstrings, Geometry, Topology, and C*-Algebras, eds. Robert Doran, Greg Friedman and Jonathan Rosenberg, Proc. Symp. Pure Math. 81, AMS, Providence, 2010, pp. 65–80.

Here we explain how to use normed division algebras to describe vectors (and thus spin-1 particles) and spinors (and thus spin-1/2 particles) in spacetimes of dimensions 3, 4, 6 and 10. We use this description to derive the 3-ψ’s rule, an identity obeyed by three spinors only in these special dimensions. We also explain how the 3-ψ’s rule is important in supersymmetric Yang–Mills theory. This stuff was known before, but not explained all in one place.

Here we go up a dimension and use normed division algebras to derive a special identity that is obeyed by 4 spinors in dimensions 4, 5, 7 and 11. This is called the 4-Ψ’s rule, and it’s important for supersymmetric 2-branes.

More importantly, we start studying how the symmetries of superstrings and super-2-branes arise from the normed division algebras. Mathematicians and physicists use Lie algebras to study symmetry, as well as generalizations called ‘Lie superalgebras’, which describe symmetries that mix bosons and fermions. Here we study categorified versions called ‘Lie 2-superalgebras’ and ‘Lie 3-superalgebras’. It turns out that the 3-ψ’s rule is a ‘3-cocycle condition’ — just the thing you need to build a Lie 2-superalgebra extending the Poincaré Lie superalgebra! Similarly, the 4-Ψ’s rule is a ‘4-cocycle condition’ which lets you build a Lie 3-superalgebra extending the Poincaré Lie superalgebra.

Next, try this:

At this point John Huerta sailed off on his own!

In this paper John cooked up the ‘Lie 2-supergroups’ that govern classical superstrings in dimensions 3, 4, 6 and 10. Just as a group is a category with one object and with all its morphisms being invertible, a 2-group is a bicategory with one object and with all its morphisms and 2-morphisms being weakly invertible. A Lie 2-supergroup is a bicategory internal to the category of supermanifolds. John shows how to derive the pentagon identity for this bicategory from the 3-ψ’s rule!

And here’s his new paper, the last of the series:

Here John built the ‘Lie 3-supergroups’ that govern classical super-2-branes in dimensions 4, 5, 7 and 11. A 3-group is a tricategory with one object and with all its morphisms, 2-morphisms and 3-morphisms being weakly invertible. John shows how to derive the ‘pentagonator identity’ — that is, a commutative diagram shaped like the 3d Stasheff polytope — from the 4-Ψ’s rule.

In case you’re wondering: I believe this game stops here. I’m pretty sure there isn’t a nontrivial 5-cocycle (valued in the trivial representation) which gives a Lie 4-superalgebra extending the Poincaré superalgebra in 12 dimensions. But I hope someone proves this, or has proved it already!

Of course, Urs Schreiber and collaborators have done vastly more general things using a more intensely modern point of view. For example:

But one thing the ‘Division Algebras and Supersymmetry’ series has to offer is a focus on the way normed division algebras help create the exceptional higher algebraic structures that underlie superstring and super-2-brane theories. And with the completion of this series, I can now relax and forget all about these ideas, confident that at this point, the minds of a younger generation will do much better things with them than I could.

I should add that Layra Idarani has been ‘live-blogging’ his reading of John Huerta’s new paper:

So, you can get another perspective there.

by john (baez@math.ucr.edu) at October 11, 2014 06:13 PM

John Baez - Azimuth

Network Theory Seminar (Part 1)

 

Check out this video! I start with a quick overview of network theory, and then begin building a category where the morphisms are electrical circuits. These lecture notes provide extra details:

Network theory (part 30).

With luck, this video will be the first of a series. I’m giving a seminar on network theory at U.C. Riverside this fall. I’ll start by sketching the results here:

• John Baez and Brendan Fong, A compositional framework for passive linear networks.

But this is a big paper, and I also want to talk about other papers, so I certainly won’t explain everything in here—just enough to help you get started! If you have questions, don’t be shy about asking them.

I thank Blake Pollard for filming this seminar, and Muhammad “Siddiq” Siddiqui-Ali for providing the videocamera and technical support.


by John Baez at October 11, 2014 04:44 PM

Tommaso Dorigo - Scientificblogging

Cold Fusion: A Better Study On The Infamous E-Cat
Do you remember the E-Cat? That is an acronym for "energy catalyzer", the device invented by the Italian philosopher Andrea Rossi. The E-Cat is claimed to produce nuclear energy through the heating of a "secret" powder made up of nickel, hydrogen, and lithium plus some additives. A new chapter was added to the saga of the E-Cat this week, with the publication of a new study by an allegedly independent group of Italian and Swedish researchers.

read more

by Tommaso Dorigo at October 11, 2014 10:03 AM

October 10, 2014

Quantum Diaries

Good Management is Science

Management done properly satisfies Sir Karl Popper’s (1902 – 1994) demarcation criterion for science, i.e. using models that make falsifiable or at least testable predictions. That was brought home to me by a book[1] by Douglas Hubbard on risk management, where he advocates observationally constrained (falsifiable or testable) models for risk analysis, evaluated through Monte Carlo calculations. Hmm: observationally constrained models and Monte Carlo calculations sound like a recipe for science.
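
To make "risk analysis evaluated through Monte Carlo calculations" concrete, here is a minimal sketch (an invented toy model, not Hubbard's) that propagates uncertain cost inputs through to a probability of exceeding a budget:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_trials = 100000

# Toy project-cost model: three uncertain line items (all numbers invented).
labour    = rng.normal(500.0, 80.0, n_trials)          # k$, roughly symmetric
hardware  = rng.lognormal(mean=np.log(200.0), sigma=0.3, size=n_trials)
# A risk event that occurs with 20% probability and then costs 100-300 k$.
delay_hit = rng.random(n_trials) < 0.20
delay     = np.where(delay_hit, rng.uniform(100.0, 300.0, n_trials), 0.0)

total = labour + hardware + delay
budget = 900.0

print(f"median cost        : {np.median(total):7.1f} k$")
print(f"90th percentile    : {np.percentile(total, 90):7.1f} k$")
print(f"P(cost > budget)   : {np.mean(total > budget):7.2%}")
# The model's predictions (e.g. the overrun probability) can then be checked
# against actuals -- the 'check' step of the PDCA cycle described below.
```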

Let us take a step back. The essence of science is modeling how the universe works and checking the assumptions of the model and its predictions against observations. The predictions must be testable. According to Hubbard, the essence of risk management is modeling processes and checking the assumptions of the model and its predictions against observations. The predictions must be testable. What we are seeing here is a common paradigm for knowledge in which modeling and testing against observation play a key role.

The knowledge paradigm is the same in project management. A project plan, with its resource loaded schedules and other paraphernalia, is a model for how the project is expected to proceed. To monitor a project you check the plan (model) against actuals (a fancy euphemism for observations, where observations may or may not correspond to reality). Again, it reduces back to observationally constrained models and testable predictions.

The foundations of science and good management practices are tied even closer together. Consider the PDCA (plan-do-check-act) cycle for process management that is present, either implicitly or explicitly, in essentially all the ISO standards related to management. It was originated by Walter Shewhart (1891 – 1967), an American physicist, engineer and statistician, and popularized by Edwards Deming (1900 – 1993), an American engineer, statistician, professor, author, lecturer and management consultant. Engineers are into everything. The actual idea of the cycle is based on the ideas of Francis Bacon (1561 – 1626) but could equally well be based on the work of Roger Bacon[2] (1214 – 1294). Hence, it should probably be called the Double Bacon Cycle (no, that sounds too much like a breakfast food).

But what is this cycle? For science, it is: plan an experiment to test a model, do the experiment, check the model results against the observed results, and act to change the model in response to the new information from the check stage or devise more precise tests if the predictions and observations agree. For process management replace experiment with production process. As a result, you have a model for how the production process should work and doing the process allows you to test the model. The check stage is where you see if the process performed as expected and the act stage allows you to improve the process if the model and actuals do not agree. The key point is the check step. It is necessary if you are to improve the process; otherwise you do not know what is going wrong or, indeed, even if something is going wrong. It is only possible if the plan makes predictions that are falsifiable or at least testable. Popper would be pleased.

There is another interesting aspect of the ISO 9001 standard. It is based on the idea of processes. A process is defined as an activity that converts inputs into outputs. Well, that sounds rather vague, but the vagueness is an asset, kind of like degrees of freedom in an effective field theory. Define them as you like but if you choose them incorrectly you will be sorry. The real advantage of effective field theory and the flexible definition of process is that you can study a system at any scale you like. In effective field theory, you study processes that operate at the scale of the atom, the scale of the nucleus or the scale of the nucleon and tie them together with a few parameters. Similarly with processes, you can study the whole organization as a process or drill down and look at sub-processes at any scale you like; for CERN or TRIUMF that would be down to the last magnet. It would not be useful to go further and study accelerator operations at the nucleon scale. At a given scale different processes are tied together by their inputs and outputs, and these are also used to tie processes at different scales.

As a theoretical physicist who has gone over to the dark side and into administration, I find it amusing to see the techniques and approaches from science being borrowed for use in administration, even Monte Carlo calculations. The use of similar techniques in science and administration goes back to the same underlying idea: all true knowledge is obtained through observation and its use to build better testable models, whether in science or other walks of life.

[1] The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard (Apr 27, 2009)

[2] Roger Bacon described a repeating cycle of observation, hypothesis, and experimentation.

by Byron at October 10, 2014 10:30 PM

Quantum Diaries

Physics Laboratory: Back to Basics

Dark matter –  it’s essential to our universe, it’s mysterious and it brings to mind cool things like space, stars, and galaxies. I have been fascinated by it since I was a child, and I feel very lucky to be a part for the search for it. But that’s not actually what I’m going to be talking about today.

I am a graduate student just starting my second year in the High Energy Physics group at UCL, London. Ironically, as a dark matter physicist working in the LUX (Large Underground Xenon detector) and LZ (LUX-ZEPLIN) collaborations, I’m actually dealing with very low energy physics.
When people ask what I do, I find myself saying different things, to differing responses:

  1. “I’m doing a PhD in physics” – reaction: person slowly backs away
  2. “I’m doing a PhD in particle physics” – reaction: some interest, mention of the LHC, person mildly impressed
  3. “I’m doing a PhD in astro-particle physics” – reaction: mild confusion but still interested, probably still mention the Large Hadron Collider
  4. “I’m looking for dark matter!” – reaction: awe, excitement, lots of questions

This obviously isn't true in all cases, but it has been the general pattern. Admittedly, I enjoy that people are impressed, but sometimes I struggle to find a way to explain to people not in physics what I actually do day to day. Often I just say, "it's a lot of computer programming; I analyse data from a detector to help towards finding a dark matter signal", but that still induces a panicked look in a lot of people.

Nevertheless, I actually came across a group of people who didn’t ask anything about what I actually do last week, and I found myself going right back to basics in terms of the physics I think about daily. Term has just started, and that means one thing: undergraduates. The frequent noise they make as they stampede past my office going the wrong way to labs makes me wonder if the main reason for sending them away for so long is to give the researchers the chance to do their work in peace.

Nonetheless, somehow I found myself in the undergraduate lab on Friday. I had to ask myself why on earth I had chosen to demonstrate – I am, almost by definition, terrible in a lab. I am clumsy and awkward, and even the simplest equipment feels unwieldy in my hands. During my own undergrad, my overall practical mark always brought my average mark down for the year. My master's project was, thank god, entirely computational. But thanks to a moment of madness (and the prospect of earning a little cash, as London living on a PhD stipend is hard), I have signed up to be a lab demonstrator for the new first year physicists.

Things started off awkwardly as I was told to brief them on the experiment and realised I had not a great deal to say.  I got more into the swing of things as time went by, but I still felt like I’d been thrown in the deep end. I told the students I was a second year PhD student; one of them got the wrong end of the stick and asked if I knew a student who was a second year undergrad here. I told him I was postgraduate and he looked quite embarrassed, whilst I couldn’t help but laugh at the thought of the chaos that would ensue if a second year demonstrated the first year labs.

The oscilloscope: the nemesis of physics undergrads in labs everywhere

None of them asked what my PhD was in. They weren't interested – somehow I had become a faceless authority who told them what to do and had no other purpose. I am not surprised – they are brand new to university, and more importantly, they were pretty distracted by the new experience of the laboratory. That's not to say they particularly enjoyed it; they seemed to have very little enthusiasm for the experiment. It was a very simple task: measuring the speed of sound in air using a frequency generator, an oscilloscope and a ruler. For someone now accustomed to dealing with data from a high-tech dark matter detector, it was bizarre! I do find the more advanced physics I learn, the worse I become at the basics, and I had to go aside for a moment with a pen and paper to reconcile the theory in my head – it was embarrassing, to say the least!
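
For what it's worth, the physics the students were wrestling with reduces to v = f λ plus error propagation; here is a small sketch (with made-up example numbers, not their actual readings) of the kind of calculation their lab books should contain:

```python
import numpy as np

# Made-up example measurements: a 2.0 kHz tone and a wavelength read off
# with a ruler from the pattern seen on the oscilloscope.
f, df = 2000.0, 20.0          # frequency and its uncertainty (Hz)
lam, dlam = 0.172, 0.004      # wavelength and its uncertainty (m)

v = f * lam                   # speed of sound, v = f * lambda
# Standard propagation for a product: (dv/v)^2 = (df/f)^2 + (dlam/lam)^2
dv = v * np.sqrt((df / f)**2 + (dlam / lam)**2)

print(f"v = {v:.0f} +/- {dv:.0f} m/s")   # e.g. 344 +/- 9 m/s
```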

Their frustration at the task was evident – there were frequent complaints over the length of time they were writing for, over the experimental 'aims' and 'objectives', and over the fact that they needed to introduce their diagrams before drawing them, etc. Eyes were rolling at me. I was going to have to really try to drill it in that this was indeed an important exercise. The panic I could sense from them was a horrible reminder of how I used to feel in my own labs. It's hard to understand at that point that this isn't just some form of torture; you are actually learning some very valuable and transferable skills about how to conduct a real experiment. Some examples:

  1. Learn to write EVERYTHING down; you might end up in court over something and some tiny detail might save you.
  2. Get your errors right. You cannot claim a discovery without an uncertainty; that's just physics. It's difficult to grasp, but you can never fully prove a hypothesis, only provide solid evidence towards it.
  3. Understand the health and safety risks – they seem pointless and stupid when the only real risk seems to be tripping over your bags, but speaking as someone who has worked down a mine with pressurised gases, high voltages and radioactive sources, they are extremely important and may be the difference between life and death.

In the end, I think my group did well. They got the right number for the speed of sound and their lab books weren’t a complete disaster. A few actually thanked me on their way out. 

It was a bit of a relief to get back to my laptop where I actually feel like I know what I am doing, but the experience was a stark reminder of where I was 5 years ago and how much I have learned. Choosing physics for university means you will have to struggle to understand things, work hard and exhaust yourself, but in all honesty it was completely worth it, at least for me. Measuring the speed of sound in air is just the beginning. One day, some of those students might be measuring the quarks inside a proton, or a distant black hole, or the quantum mechanical properties of a semiconductor.

I’m back in the labs this afternoon, and I am actually quite looking forward to seeing how they cope this week, when we study that essential pillar of physics, conservation of momentum. I just hope they don’t start throwing steel ball-bearings at each other. Wish me luck.

by Sally Shaw at October 10, 2014 09:08 PM

Sean Carroll - Preposterous Universe

The Evolution of Evolution: Gradualism, or Punctuated Equilibrium?

In some ways I’m glad I’m not an evolutionary biologist, even though the subject matter is undoubtedly fascinating and fundamental. Here in the US, especially, it’s practically impossible to have a level-headed discussion about the nature of evolutionary theory. Biologists are constantly defending themselves against absurd attacks from creationists and intelligent-design advocates. It can wear you down and breed defensiveness, which is not really conducive to carrying on a vigorous discussion about the state of that field.

But such discussions do exist, and are important. Here’s an interesting point/counter-point in Nature, in which respectable scientists argue over the current state of evolutionary theory: is it basically in good shape, simply requiring a natural amount of tweaking and updating over time, or is revolutionary re-thinking called for?

Illustration of cichlids from different lakes, by R. Craig Albertson.

I’m a complete novice here, so my opinion should count for almost nothing. But from reading the two arguments, I tend to side with the gradualists on this one. As far as I can tell, the revolutionaries make their case by setting up a stripped-down straw-man version of evolution that nobody really believes (nor ever has, going back to Darwin), then proclaiming victory when they show that it’s inadequate, even though nobody disagrees with them. They want, in particular, to emphasize the roles of drift and development and environmental feedback — all of which seem worth emphasizing, but I’ve never heard anyone deny them. (Maybe I’m reading the wrong people.) And they very readily stoop to ad hominem psychoanalysis of their opponents, saying things like this:

Too often, vital discussions descend into acrimony, with accusations of muddle or misrepresentation. Perhaps haunted by the spectre of intelligent design, evolutionary biologists wish to show a united front to those hostile to science. Some might fear that they will receive less funding and recognition if outsiders — such as physiologists or developmental biologists — flood into their field.

Some might fear that, I guess. But I’d rather hear a substantive argument than be told from the start that I shouldn’t listen to those other folks because they’re just afraid of losing their funding. And the substantive arguments do exist. There’s no question that the theory of evolution is something that is constantly upgraded and improved as we better understand the enormous complexity of biological processes.

The gradualists (in terms of theory change, not necessarily in terms of how natural selection operates), by contrast, seem to make good points (again, to my non-expert judgment). Here’s what they say in response to their opponents:

They contend that four phenomena are important evolutionary processes: phenotypic plasticity, niche construction, inclusive inheritance and developmental bias. We could not agree more. We study them ourselves.

But we do not think that these processes deserve such special attention as to merit a new name such as ‘extended evolutionary synthesis’…

The evolutionary phenomena championed by Laland and colleagues are already well integrated into evolutionary biology, where they have long provided useful insights. Indeed, all of these concepts date back to Darwin himself, as exemplified by his analysis of the feedback that occurred as earthworms became adapted to their life in soil…

We invite Laland and colleagues to join us in a more expansive extension, rather than imagining divisions that do not exist.

Those don’t really read like the words of hidebound reactionaries who are unwilling to countenance any kind of change. It seems like a mistake for the revolutionaries to place so much emphasis on how revolutionary they are being, rather than concentrating on the subtle work of figuring out the relative importance of all these different factors to evolution in the real world — the importance of which nobody seems to deny, but the quantification of which is obviously a challenging empirical problem.

Fortunately physicists are never like this! It can be tough to live in a world of pure reason and unadulterated rationality, but someone’s got to do it.

by Sean Carroll at October 10, 2014 07:36 PM

astrobites - astro-ph reader's digest

Cold Brown Dwarfs

Title: The Luminosities of the Coldest Brown Dwarfs
Authors: C. G. Tinney, Jacqueline K. Faherty, J. Davy Kirkpatrick, Mike Cushing, Caroline V. Morley, and Edward L. Wright
First Author’s Institution: UNSW, Australia
Status: Accepted to The Astrophysical Journal

Y dwarfs are a new addition (since 2011) to the very bottom of the stellar classification scheme. The trusted mnemonic “Oh, Be A Fine Girl/Guy, Kiss Me” might need some extra words, because we keep finding examples of cooler and cooler objects, including the L, T, and now Y dwarfs. A brown dwarf is often thought of as a failed star; it might fuse deuterium at some point, but isn’t massive enough to sustain hydrogen fusion. There is also ongoing debate over what the dividing line is between a brown dwarf and a planet, since at the smaller end of their distribution, these objects resemble large Jupiters much more than a tiny sun. Y dwarfs have temperatures less than 500K and masses between only 5 and 30 Jupiter masses. Temperature-wise, this places them neatly between the gas giants in our own solar system, around 130K, and the planets we find by direct imaging, at 1000-1500K. As you might imagine, finding these guys can be tough, and we’re only just beginning to understand their complicated atmospheres. Luckily brown dwarfs are relatively numerous, so there are enough close by that we can sort out some answers.


Figure 1. Spectral type – magnitude diagram. You can clearly see in this chart that some objects’ assigned spectral types and measured absolute magnitudes put them well outside the established trend.

One of the most basic questions we ask about an object in astronomy is, “How bright is it?” Finding accurate bolometric luminosities depends on knowing the absolute magnitude, and therefore an accurate distance to these objects. For brown dwarfs, this is usually found by trigonometric parallax, which is always challenging. The authors illustrate this by pointing out that an object at 25 pc shows a parallax of only 40 milliarcseconds. That’s an incredibly tiny angle to measure accurately, yet techniques are always improving. The authors therefore conducted a targeted search for the parallax motions of very faint Y dwarfs, in order to achieve more accurate absolute magnitudes.
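
Where does that 40 milliarcsecond figure come from? As a quick back-of-the-envelope check (my own sketch, not a calculation from the paper), the standard parallax-distance relation for nearby objects is

parallax [arcseconds] = 1 / distance [parsecs], so at 25 pc: parallax = 1/25 arcsec = 0.04 arcsec = 40 milliarcseconds.

That is roughly the angle a small coin subtends when viewed from more than a hundred kilometres away, which gives a sense of why these measurements are so demanding.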

They plot their results first in a spectral type-magnitude diagram in Figure 1. They plot T dwarfs for comparison at the brighter, hotter edge of the diagram, to show how the Y dwarfs fit into the larger trend. The authors point out that spectral type (which is assigned by looking at the shape of an object’s spectrum) can be difficult to pin down for these objects, and is usually uncertain at the level of about half a sub-type. So a T9.5 could actually be a Y0. But even with this uncertainty, there’s a lot of scatter, with many objects on this plot falling well away from the median line (in black). This means they are either substantially brighter or dimmer than the uncertainties can explain.


Figure 2. Color-magnitude diagram. Lines represent a variety of atmospheric models with different kinds of clouds. These serve as predictions for where objects are expected to fall on the plot. The presence and type of clouds can affect the observed color of the object.

The same scatter carries over to Figure 2, a more traditional color-magnitude diagram. Part of the scatter is due to the fact that accurately modeling these objects is extremely challenging. At these low temperatures, all kinds of molecular features come into play, and cloud physics and radiative transfer are tricky to calculate. Consequently, the models give a range of predictions for where these Y dwarfs should sit on a color-magnitude diagram, and even allowing for that range, there are objects that are far too bright for their color. The authors suggest that these could be unresolved binaries, in which case adaptive optics imaging could resolve the issue.

Another explanation for targets that fall away from the predicted tracks is that they are cloudy. The bands used for this study are the near-infrared J band, a filter which peers deep into the atmosphere, and the WISE W2 band, whose flux is emitted from high altitudes. Sulphide and salt clouds form lower in the atmosphere, decreasing the J-band flux and making objects redder than their cloud-free equivalents, in which case they will sit above the predicted trend line and appear overly bright. On the other hand, water ice clouds form higher in the atmosphere and affect the W2 band more strongly, so they will cause bluer colors, and these objects may appear too dim.
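
To put a rough number on that reddening (my own illustrative arithmetic, not a figure from the paper): a color like J - W2 is simply the difference of two magnitudes, so clouds shift it only through the band they actually affect. If low-lying sulphide or salt clouds were to block half of the J-band flux while leaving W2 untouched, the J magnitude would increase by 2.5 log10(2), about 0.75 magnitudes, and the J - W2 color would redden by the same 0.75 magnitudes. A comparable suppression of W2 by high water ice clouds would instead push the color bluer by a similar amount.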

Of course, at some point, this all comes back to physics. Why do we see such large scatter on the color-magnitude diagram between otherwise similar objects? Why should one Y dwarf have thick clouds and another an almost transparent atmosphere? The answer probably lies in cloud variability. We already have evidence that L and T dwarfs vary in time, so it would not be too surprising if a single Y dwarf wandered around this diagram as it rotates, showing us a cloudy side one day and clear skies the next. So now that we know where at least some of these Y dwarfs are, it’s time to watch them very carefully and see what they can tell us about their clouds.

by Korey Haynes at October 10, 2014 01:05 PM

Symmetrybreaking - Fermilab/SLAC

‘CERN People’ tells it like it is

A new video series about scientists at CERN pulls back the curtain on what it’s like to be a physicist during a pivotal time in the field.

American director and documentary filmmaker Liz Mermin has traveled from beauty schools in Afghanistan to Bollywood movie sets in India filming people at their work. In 2011, her documentation of unconventional office environments brought her to CERN.

“I always knew about CERN in a vague way, but I did not know much about it,” Mermin says. “But I was interested in CERN because my father is a physicist.”

Over the course of two years, Mermin and her co-director returned to CERN about a dozen times to film physicists at work. From this footage they created “CERN People,” a series of short films that focus on the challenge and excitement of being at CERN in the time leading up to and following the discovery of the Higgs boson.

“In these little vignettes, each one has a theme or an idea that we are exploring,” Mermin says. “Vignettes require a different kind of demand on your attention than a feature film, so we had a little fun with it.”

One of Mermin’s favorite videos is “Tau Trouble,” which features Brazilian-American physicist Phil Harris, who was searching for the Higgs boson decaying into tau leptons in the lead-up to the discovery.

“In the run-up to the big announcement, the scientists working on the tau channel were not seeing any evidence of a Higgs boson and were skeptical if this really was the Higgs boson or not,” Mermin says. “In this video, we wanted to get across the idea that there are lots of different ways to look for the Higgs and that the searches don't always see the same thing initially. It was exciting seeing that happen.”

During the filming of this series, Mermin was very impressed with the level of intensity with which physicists approach their work.

“It was difficult to get people to stop working and talk on camera,” Mermin says. “Everyone was so driven and in a rush to move ahead in work.”

Mermin hopes that this series will give people a better idea about the people behind the research at CERN and the passion with which they approach their work.

“This series is not trying to explain what the Higgs is,” Mermin says. “It is about searching for something new and how this process works. I hope people get a bit of feeling about what goes on and a little more invested in the value of fundamental research like this.”

 


by Sarah Charley at October 10, 2014 01:00 PM

October 09, 2014

Jon Butterworth - Life and Physics

Why so much excitement over the Higgs boson?

On 5th September I talked to Robyn Williams of ABC (Australia)’s The Science Show in a basement room at UCL. We covered the Higgs, the LHC, Smashing Physics, all kinds of things, and he recorded it on a very elderly little machine. Here’s the result.


Filed under: Particle Physics, Physics, Science Tagged: ABC, audio, Higgs, LHC, Smashing Physics, UCL

by Jon Butterworth at October 09, 2014 03:33 PM

The Great Beyond - Nature blog

Stem-cell fraud makes for box office success

Posted on behalf of David Cyranoski and Soo Bin Park

Fictionalized film follows fabricated findings

Stem-cell fraudster faces down the journalist who debunks him in the film sweeping Korean cinemas.

Wannabe Fun

A movie based on the Woo Suk Hwang cloning scandal drew more than 100,000 viewers on its opening day (2 October) and has been topping box office sales in South Korea since then. With some of the country’s biggest stars, it has made a blockbuster out of a dismal episode in South Korean stem-cell research — and revealed the enduring tension surrounding it.

The movie, Whistleblower, shines a sympathetic light on Woo Suk Hwang, the professor who in 2004 and 2005 claimed to have created stem-cell lines from cloned human embryos. The achievement would have provided a means to make cells genetically identical to a patient’s own, and able to form almost any type of cell in the body. But hopes were shattered when Hwang’s claims turned out to be based on fraudulent data and unethical procurement of eggs. The whistleblower who revealed the fraud says the new movie strays far from reality.

“This topic is sensitive, so I was hesitant when I got the first offer,” said director Yim Soon-rye at the premiere on 16 September in Seoul. “I wanted to portray him [Lee Jang-hwan, Hwang’s character in the film] as a character who faces a very human problem, and to show there is room to understand his actions.”  Although clearly inspired by the real-life events surrounding Hwang and his cloning claims, the film does not aim to be a true representation of events, but a ‘restructured fiction’ created for a movie audience.

The movie broadly follows the scandal as it actually unravelled, tracing the process through which the stem-cell claims were debunked. Some changes are made, apparently for dramatic effect: Snuppy, the Afghan hound produced by cloning in Hwang’s laboratory, becomes Molly, also an Afghan hound, but one with cancer. When Lee sees the writing on the wall, he is shown going to a Buddhist temple, where he rubs Molly’s fur, saying “I came too far … I missed my chance to stop.”

Yim says he wanted the fraudster “to be interpreted multi-dimensionally, rather than as a simple fraud or evil person”.

But rather than the scientists, Yim puts the perseverance of the reporter at the centre of the film, and ends up skewing relevant facts, says Young-Joon Ryu, the real whistleblower. Ryu, who had been a key figure in Hwang’s laboratory, says his own contributions and those of online bloggers were credited to the reporter. (The discovery that Hwang had unethically procured eggs, first reported in Nature, was also credited to the reporter.)

The film has refuelled anger in some Hwang supporters who believe, despite evidence to the contrary, that Hwang did have human-cloning capabilities and that the scandal deprived the country of a star scientist. They are back online calling Ryu a betrayer.

Ryu understands that a movie might emphasize “fast action, dramatic conflicts and famous actors” to increase box office revenues. But having suffered through one perversion of the truth as Hwang made his original claims, he says that watching the film he felt he was witnessing another.

by Anna Nagle at October 09, 2014 11:23 AM