Particle Physics Planet


March 04, 2015

The n-Category Cafe

Lebesgue's Universal Covering Problem

Lebesgue’s universal covering problem is famously difficult, and a century old. So I’m happy to report some progress:

• John Baez, Karine Bagdasaryan and Philip Gibbs, Lebesgue’s universal covering problem.

But we’d like you to check our work! It will help if you’re good at programming. As far as the math goes, it’s just high-school geometry… carried to a fanatical level of intensity.

Here’s the story:

A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. You know what a circle of diameter 1 looks like. But an equilateral triangle with edges of length 1 also has diameter 1:

After all, two points in this triangle are farthest apart when they’re at two corners.

Note that this triangle doesn’t fit inside a circle of diameter 1:

There are lots of sets of diameter 1, so it’s interesting to look for a set that can contain them all.
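
As a quick computational aside (my own sketch, not part of the paper): for a finite set of points the diameter is just the largest pairwise distance, and for a convex polygon it is attained at a pair of corners, as noted above for the triangle. So the claim can be checked numerically:

```python
from itertools import combinations
from math import dist, sqrt

def diameter(points):
    """Largest distance between any two points of a finite set."""
    return max(dist(p, q) for p, q in combinations(points, 2))

# Vertices of an equilateral triangle with side length 1.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]
print(diameter(triangle))  # approximately 1.0, so the triangle has diameter 1
```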

In 1914, the famous mathematician Henri Lebesgue sent a letter to a pal named Pál. And in this letter he challenged Pál to find the convex set with smallest possible area such that every set of diameter 1 fits inside.

More precisely, he defined a universal covering to be a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. And his challenge was to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

√3/2 = 0.86602540…

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

Our paper explains why you can remove these triangles, assuming the hexagon was a universal covering in the first place. The resulting universal covering has area

2 - 2/√3 = 0.84529946…

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708435…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 × 10⁻¹¹ and 6 × 10⁻¹⁸. However, our paper redoes his calculation and shows that the second number is seriously wrong. The actual areas are 3.7507 × 10⁻¹¹ and 8.4460 × 10⁻²¹.

Philip Gibbs has created a Java applet illustrating Hansen’s universal cover. I urge you to take a look! You can zoom in and see the regions he removed:

• Philip Gibbs, Lebesgue’s universal covering problem.

I find that my laptop, a Windows machine, makes it hard to view Java applets because they’re a security risk. I promise this one is safe! To be able to view it, I had to go to the “Search programs and files” window, find the “Configure Java” program, go to “Security”, and add

http://gcsealgebra.uk/lebesgue/hansen

to the “Exception Site List”. It’s easy once you know what to do.

And it’s worth it, because only the ability to zoom lets you get a sense of the puny slivers that Hansen removed! One is the region XE2T here, and the other is T′C3V:

You can use this picture to help you find these regions in Philip Gibbs’ applet. But this picture is not to scale! In fact the smaller region, T′C3V, has length 3.7 × 10⁻⁷ and maximum width 1.4 × 10⁻¹⁴, tapering down to a very sharp point.

If you drew the whole hexagon tens of kilometres across, this sliver would still be only a few atoms wide! And it’s about 30 million times longer than it is wide. This is the sort of thing you can only draw with the help of a computer.

Anyway, Hansen’s best universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, our new universal covering removes about a million times more area than Hansen’s larger region: a whopping 2.233 × 10⁻⁵. So, we get a universal covering with area

0.844115376859…

The key is to slightly rotate the dodecagon shown in the above pictures, and then use the ideas of Pál and Sprague.

There’s a lot of room between our number and the best lower bound on this problem, due to Brass and Sharifi:

0.832

So, one way or another, we can expect a lot of progress now that computers are being brought to bear. Philip Gibbs has a heuristic computer calculation pointing toward a value of

0.84408

so perhaps that’s what we should shoot for.

Read our paper for the details! If you want to check our work, we’ll be glad to answer lots of detailed questions. We want to rotate the dodecagon by an amount that minimizes the area of the universal covering we get, so we use a program to compute the area for many choices of rotation angle:

• Philip Gibbs, Java program.

The program is not very long—please study it or write your own, in your own favorite language! The output is here:

• Philip Gibbs, Java program output.

and as explained at the end of our paper, the best rotation angle is about 1.3°.
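
If you would like to prototype the check before digging into the Java source, the structure of the computation is a simple one-dimensional scan: evaluate the covering’s area on a grid of rotation angles and take the minimum. Here is a bare scaffold in Python; covering_area is a placeholder that you would have to fill in with the geometric construction from the paper (or port from Philip Gibbs’ program), so only the scanning logic is shown.

```python
import numpy as np

def covering_area(angle_degrees: float) -> float:
    """Placeholder: area of the reduced universal covering obtained after
    rotating the dodecagon by angle_degrees and applying the Pal/Sprague-style
    cuts described in the paper.  Port the geometric construction here."""
    raise NotImplementedError

def best_rotation(lo: float = 0.0, hi: float = 3.0, steps: int = 3001):
    """Grid-scan the rotation angle (in degrees) and return the best one found."""
    angles = np.linspace(lo, hi, steps)
    areas = [covering_area(a) for a in angles]
    i = int(np.argmin(areas))
    return angles[i], areas[i]

# angle, area = best_rotation()
# print(angle, area)   # the paper reports an optimum near 1.3 degrees
```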

by john (baez@math.ucr.edu) at March 04, 2015 01:34 AM

March 03, 2015

Christian P. Robert - xi'an's og

Overfitting Bayesian mixture models with an unknown number of components

During my Czech vacations, Zoé van Havre, Nicole White, Judith Rousseau, and Kerrie Mengersen posted on arXiv a paper on overfitting mixture models to estimate the number of components. This is directly related to Judith and Kerrie’s 2011 paper and to Zoé’s PhD topic. The paper also returns to the vexing (?) issue of label switching! I very much like the paper, and not only because the authors are good friends, but also because it brings a solution to an approach I briefly attempted with Marie-Anne Gruet in the early 1990’s, just before finding out about the reversible jump MCMC algorithm of Peter Green at a workshop in Luminy and considering we were not going to “beat the competition”! Hence not publishing the output of our over-fitted Gibbs samplers that were nicely emptying extra components… It also brings a rebuke to a later assertion of mine at an ICMS workshop on mixtures, where I defended the notion that over-fitted mixtures could not be detected, a notion that was severely disputed by David MacKay…

What is so fantastic in Rousseau and Mengersen (2011) is that a simple constraint on the Dirichlet prior on the mixture weights suffices to guarantee that asymptotically superfluous components will empty out and signal they are truly superfluous! The authors here combine the over-fitted mixture with a tempering strategy, which seems somewhat redundant, the number of extra components being a sort of temperature, but eliminates the need for fragile RJMCMC steps. Label switching is obviously even more of an issue with a larger number of components, and identifying empty components seems to require a lack of label switching for some components to remain empty!
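
To see the emptying-out behaviour in a toy setting, here is a minimal sketch of an over-fitted Gibbs sampler for a univariate Gaussian mixture with a sparse symmetric Dirichlet prior on the weights. This is my own illustration, not the authors’ code, and all priors and data are arbitrary choices; the point is only that with the Dirichlet concentration set below the Rousseau and Mengersen (2011) threshold, the weights of the superfluous components should collapse towards zero over the run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a 2-component Gaussian mixture, deliberately over-fitted with K = 6.
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
n, K = len(x), 6

alpha = 0.1          # symmetric Dirichlet concentration, below the d/2 = 1 threshold
                     # for the 2 free parameters per component (Rousseau & Mengersen, 2011)
m0, s0 = 0.0, 10.0   # Normal prior on component means
a0, b0 = 2.0, 2.0    # Inverse-Gamma prior on component variances

w = np.full(K, 1.0 / K)
mu = rng.normal(0.0, 3.0, K)
var = np.ones(K)

for it in range(2000):
    # 1. Allocations z_i | w, mu, var
    logp = np.log(w) - 0.5 * np.log(2 * np.pi * var) - 0.5 * (x[:, None] - mu) ** 2 / var
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])
    counts = np.bincount(z, minlength=K)

    # 2. Weights | z  ~  Dirichlet(alpha + counts)
    w = rng.dirichlet(alpha + counts)

    # 3. Component means and variances | z (conjugate updates)
    for k in range(K):
        xk = x[z == k]
        nk = len(xk)
        prec = 1.0 / s0**2 + nk / var[k]
        mu[k] = rng.normal((m0 / s0**2 + xk.sum() / var[k]) / prec, np.sqrt(1.0 / prec))
        a = a0 + 0.5 * nk
        b = b0 + 0.5 * np.sum((xk - mu[k]) ** 2)
        var[k] = 1.0 / rng.gamma(a, 1.0 / b)   # Inverse-Gamma(a, b) draw

print("sorted weights (last draw):", np.round(np.sort(w)[::-1], 3))
print("occupied components       :", int(np.sum(counts > 0)))
```

A proper analysis would of course monitor the weight draws after burn-in (and worry about label switching) rather than look at a single final draw.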

When reading through the paper, I came upon the condition that only the priors of the weights are allowed to vary between temperatures. Distinguishing the weights from the other parameters does make perfect sense, as some representations of a mixture work without those weights. Still I feel a bit uncertain about the fixed prior constraint, even though I can see the rationale in not allowing for complete freedom in picking those priors. More fundamentally, I am less and less happy with independent identical or exchangeable priors on the components.

Our own recent experience with almost zero weights mixtures (and with Judith, Kaniav, and Kerrie) suggests not using solely a Gibbs sampler there as it shows poor mixing. And even poorer label switching. The current paper does not seem to meet the same difficulties, maybe thanks to (prior) tempering.

The paper proposes a strategy called Zswitch to resolve label switching, which amounts to identifying a MAP for each possible number of components and a subsequent relabelling, even though I do not entirely understand the way the permutation is constructed. I also wonder about the cost of the relabelling.


Filed under: Statistics Tagged: component of a mixture, Czech Republic, Gibbs sampling, label switching, Luminy, mixture estimation, Peter Green, reversible jump, unknown number of components

by xi'an at March 03, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

Watch Ceres rotate: A guide to interpreting Dawn's images
NASA held a press briefing on the Dawn mission yesterday, sharing some new images and early interpretations of them. I see lots of things that intrigue me, and I'm looking forward to Dawn investigating them in more detail. I invite you to check out these photos yourself, and offer you some guidance on things to look for.

March 03, 2015 08:55 PM

astrobites - astro-ph reader's digest

A new tool for hunting exoplanetary rings
  • A Novel Method for identifying Exoplanetary Rings
  • Authors: J. I. Zuluaga, D. M. Kipping, M. Sucerquia and J. A. Alvarado
  • First Author’s Institutions: 1) Harvard-Smithsonian Center for Astrophysics, 2) FACom – Instituto de Fisica – FCEN, Universidad de Antioquia, Colombia,  3) Fulbright Visitor Scholar

Today’s question: Do you like rings?

Let’s start this Astrobite a bit differently than usual. Before you read on, please click here and tell us about your favorite planet… Are you done? Good! The reason why I’m asking has to do with the ring structure around Saturn. Assuming you like Saturn’s rings, you are probably also curious whether exoplanets reveal ring structures, too, and how those can be detected. The answer to the first question is ‘Yes!’, and if you like Saturn, you probably fell in love with this planet. As Ruth told you, the authors of that discovery put a lot of work into finding an explanation for the observed profile by comparing different transit profiles before they concluded that the planet hosts circumplanetary rings. That’s the point where today’s authors come into play. They present a model which simplifies the detection of exorings.

Figure 1: Sketch of a transit of a planet with rings (top) and the corresponding schematic illustration of the observed flux (bottom). T_{14} corresponds to the entire transiting interval of the planet with/without rings, while T_{23} corresponds to the time interval of the full transit of the planet with/without rings. The figure corresponds to Fig. 1 in the letter.

Characteristic features of planets hosting circumplanetary rings

Figure 1 illustrates their underlying thoughts. The yellow area represents the host star and I guess it is easy to spot the planet with its circumplanetary rings moving from left to right. When the planet moves together with the rings to the right of position x_3 (when it starts its egress), one side of the rings will not hide the light of the host star anymore and thus the observed flux from the host star slowly increases. The planet itself leaves the transiting area a bit later, which then leads to a steeper increase in flux until the planet does not hide any of the light anymore. At that point the increase in flux becomes shallower again due to the other side of the ring structure, which still covers part of the light from the host star before this side also stops transiting at position x_4. When the planet and its rings start transiting (the ingress), the process is reversed: more and more of the star’s visible area is hidden and the flux decreases. In principle there are two different time intervals for a transit of a planet:

  • The time from the entering of the planet in the transiting region until the time the planet does not cover any light of the host anymore (corresponding to T_{14} or just “transit”).
  • The time in which the entire planet covers the light of the host (corresponding to T_{23} or “full transit”).

Considering now also the rings, four time intervals exist, namely T_{14,ring}, T_{14,planet}, T_{23,planet} and T_{23,ring} (as shown in the graph of Fig. 1). The relative flux difference during the transit of the planet, with or without rings, compared to the unperturbed flux of the star is called the transit depth (\delta=(F-F_0)/F_0). In practice, the slopes corresponding to the ingress/egress of the ring and the ingress/egress of the planet are difficult to distinguish from each other. The authors stress that exoplanets with rings could be mistakenly interpreted as ringless planets. Assuming the star and the planet are spherical and the rings have a uniform shape, the transit depth simply becomes the ratio of the area hidden by the planet including its rings to the projected surface area of the star (\delta=A_{rp}/A_{\ast}). If the rings’ plane is perpendicular to the orbital direction and the observed transit depth of the ringed planet is interpreted as the transit depth of a ringless planet, the overestimated radius of the planet leads to an underestimation of the planetary density (as shown in Fig. 2).
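
To get a rough feel for this anomalous-depth effect, here is a toy estimate, written by me rather than taken from the authors’ exorings code, of how much an opaque, tilted ring inflates the apparent planetary radius and hence deflates the inferred density; it ignores the overlap between the projected ring and the planetary disk, so it is only indicative.

```python
import numpy as np

def observed_radius_ratio(p_true, f_inner, f_outer, tilt_deg):
    """Apparent Rp/Rstar when an opaque ring (inner/outer radii f_inner*Rp and
    f_outer*Rp, tilted by tilt_deg away from face-on) adds its projected area
    to the planetary disk.  Toy model: the ring/planet overlap region and any
    transparency of the ring are ignored."""
    a_planet = np.pi * p_true**2                      # areas in units of Rstar^2
    a_ring = np.pi * (f_outer**2 - f_inner**2) * p_true**2 * np.cos(np.radians(tilt_deg))
    delta = (a_planet + a_ring) / np.pi               # transit depth A_rp / A_star
    return np.sqrt(delta)                             # ringless radius giving the same depth

# Saturn-like toy numbers: ring from 1.2 to 2.3 planetary radii, seen 20 degrees from face-on.
p_true = 0.08
p_obs = observed_radius_ratio(p_true, 1.2, 2.3, tilt_deg=20.0)
print(p_obs / p_true)          # apparent radius is inflated by roughly a factor of 2 here
print((p_true / p_obs) ** 3)   # the inferred planetary density drops by roughly this factor
```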

Figure 2: Illustration of the effect of the projected inclination of the rings on the ratio of observed to true planetary radius. The degree of inclination is illustrated by the black dots and their surrounding rings. The different colors represent different transit depths. The figure corresponds to the upper panel in Fig. 2 of the letter.

Additionally, the authors repeat the derivation showing that the stellar density \rho_{\ast} is proportional to \delta^{3/4}(T_{14}^2-T_{23}^2)^{-3/2}, which reveals another potential misinterpretation. Figure 3 illustrates the effect of different ring inclinations on the inferred stellar density. In the case of a ring plane perpendicular to the orbital direction, the stellar density would be overestimated. However, in the more common case of alignment of the rings’ plane with the orbital plane, the increased difference T_{14}-T_{23} leads to an underestimation of the stellar density.

A publicly available code allows for hunting 

Taking into account the described phenomena of anomalous depth and the photo-ring effect to estimate probability distribution functions for the occurrence of the effects, the authors developed a computer code, which you can use to go out hunting for exoring candidates! They suggest that you focus on planets (and candidates) with low densities and use their publicly available code (https://github.com/facom/exorings) to do so. But here’s a disclaimer: the code can only find candidates. To confirm their existence you still need to do a complex fit of the light curve. That’s something the code cannot do for you.

Figure 3: Illustration of the photo-ring effect. The color scale displays the relative difference of observed radiation to stellar radiation. The two axes represent the tilt of the rings with respect to the two planes. Note that the y-axis corresponds to the actual angle, while the projected inclination on the x-axis is a cosine. The black crosses represent a sub-sample of observed transits with low obliquity. The figure corresponds to the upper panel of Fig. 3 in the letter.

by Michael Küffmeier at March 03, 2015 07:44 PM

ZapperZ - Physics and Physicists

Two Quantum Properties Teleported Simultaneously
People all over the net are going ga-ga over the report on the imaging of the wave-particle behavior of light at the same time. I, on the other hand, am more fascinated by the report that two different quantum properties have been teleported simultaneously for the very first time.

The values of two inherent properties of one photon – its spin and its orbital angular momentum – have been transferred via quantum teleportation onto another photon for the first time by physicists in China. Previous experiments have managed to teleport a single property, but scaling that up to two properties proved to be a difficult task, which has only now been achieved. The team's work is a crucial step forward in improving our understanding of the fundamentals of quantum mechanics and the result could also play an important role in the development of quantum communications and quantum computers. 

 See if you can view the actual Nature paper here. I'm not sure how long the free access will last.

Zz.

by ZapperZ (noreply@blogger.com) at March 03, 2015 04:48 PM

Quantum Diaries

Detecting something with nothing

This article appeared in Fermilab Today on March 3, 2015.

From left: Jason Bono (Rice University), Dan Ambrose (University of Minnesota) and Richie Bonventre (Lawrence Berkeley National Laboratory) work on the Mu2e straw chamber tracker unit at Lab 3. Photo: Reidar Hahn

Researchers are one step closer to finding new physics with the completion of a harp-shaped prototype detector element for the Mu2e experiment.

Mu2e will look for the conversion of a muon to only an electron (with no other particles emitted) — something predicted but never before seen. This experiment will help scientists better understand how these heavy cousins of the electron decay. A successful sighting would bring us nearer to a unifying theory of the four forces of nature.

The experiment will be 10,000 times as sensitive as other experiments looking for this conversion, and a crucial part is the detector that will track the whizzing electrons. Researchers want to find one whose sole signature is its energy of 105 MeV, indicating that it is the product of the elusive muon decay.

In order to measure the electron, scientists track the helical path it takes through the detector. But there’s a catch. Every interaction with detector material skews the path of the electron slightly, disturbing the measurement. The challenge for Mu2e designers is thus to make a detector with as little material as possible, says Mu2e scientist Vadim Rusu.

“You want to detect the electron with nothing — and this is as close to nothing as we can get,” he said.

So how to detect the invisible using as little as possible? That’s where the Mu2e tracker design comes in. Panels made of thin straws of metalized Mylar, each only 15 microns thick, will sit inside a cylindrical magnet. Rusu says that these are the thinnest straws that people have ever used in a particle physics experiment.

These straws, filled with a combination of argon and carbon dioxide gas and threaded with a thin wire, will wait in vacuum for the electrons. Circuit boards placed on both ends of the straws will gather the electrical signal produced when electrons hit the gas inside the straw. Scientists will measure the arrival times at each end of the wire to help accurately plot the electron’s overall trajectory.

“This is another tricky thing that very few have attempted in the past,” Rusu said.

The group working on the Mu2e tracker electronics has also created the tiny, low-power circuit boards that will sit at the end of each straw. With limited space to run cooling lines, which are needed to whisk away heat that would otherwise linger in the vacuum, the electronics needed to be as cool and small as possible.

“We actually spent a lot of time designing very low-power electronics,” Rusu said.

This first prototype, which researchers began putting together in October, gives scientists a chance to work out kinks, improve design and assembly procedures, and develop the necessary components.

One lesson already learned? Machining curved metal with elongated holes that can properly hold the straws is difficult and expensive. The solution? Using 3-D printing to make a high-tech, transparent plastic version instead.

Researchers also came up with a system to properly stretch the straws into place. While running a current through the straw, they use a magnet to pluck the straw — just like strumming a guitar string — and measure the vibration. This lets them set the proper tension that will keep the straw straight throughout the lifetime of the experiment.
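
The physics behind this is just the vibrating-string relation: the fundamental frequency of a straw of length L and mass per unit length μ under tension T is f1 = (1/2L)·sqrt(T/μ), so measuring f1 gives the tension directly. The numbers below are purely illustrative, not Mu2e’s actual straw parameters.

```python
def tension_from_frequency(f1_hz: float, length_m: float, mu_kg_per_m: float) -> float:
    """Tension of a stretched string from its fundamental frequency:
    f1 = (1 / 2L) * sqrt(T / mu)  =>  T = mu * (2 * L * f1)**2."""
    return mu_kg_per_m * (2.0 * length_m * f1_hz) ** 2

# Illustrative values only (not the real straw specs):
print(tension_from_frequency(f1_hz=100.0, length_m=1.0, mu_kg_per_m=2e-4))  # 8.0 N
```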

Although the first prototype of the tracker is complete, scientists are already hard at work on a second version (using the 3D-printed plastic), which should be ready in June or July. The prototype will then be tested for leaks and to see if the electronics pick up and transmit signals properly.

A recent review of Mu2e went well, and Rusu expects work on the tracker construction to begin in 2016.

Lauren Biron

by Fermilab at March 03, 2015 03:22 PM

Georg von Hippel - Life on the lattice

QNP 2015, Day One
Hello from Valparaíso, where I continue this year's hectic conference circuit at the 7th International Conference on Quarks and Nuclear Physics (QNP 2015). Except for some minor inconveniences and misunderstandings, the long trip to Valparaíso (via Madrid and Santiago de Chile) went quite smoothly, and so far, I have found Chile a country of bright sunlight and extraordinarily helpful and friendly people.

The first speaker of the conference was Emanuele Nocera, who reviewed nucleon and nuclear parton distributions. The study of parton distributions becomes necessary because hadrons are really composed not simply of valence quarks, as the quark model would have it, but of an indefinite number of (sea) quarks, antiquarks and gluons, any of which can contribute to the overall momentum and spin of the hadron. In an operator product expansion framework, hadronic scattering amplitudes can then be factorised into Wilson coefficients containing short-distance (perturbative) physics and parton distribution functions containing long-distance (non-perturbative) physics. The evolution of the parton distribution functions (PDFs) with the momentum scale is given by the DGLAP equations containing the perturbatively accessible splitting functions. The PDFs are subject to a number of theoretical constraints, of which the sum rules for the total hadronic momentum and valence quark content are the most prominent. For nuclei, one can assume that a factorisation similar to that for hadrons still holds, and that the nuclear PDFs are linear combinations of nucleon PDFs modified by multiplication with a binding factor; however, nuclei exhibit correlations between nucleons, which are not well described in such an approach. Combining all available data from different sources, global fits to PDFs can be performed using either a standard χ2 fit with a suitable model, or a neural network description. There are far more and better data on nucleon than nuclear PDFs, and for nucleons the amount and quality of the data also differ between unpolarised and polarised PDFs, which are needed to elucidate the "proton spin puzzle".
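
For reference, the DGLAP evolution mentioned above takes, at leading order, the schematic form

\mu^2 \frac{\partial f_i(x,\mu^2)}{\partial \mu^2} = \sum_j \frac{\alpha_s(\mu^2)}{2\pi} \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, f_j(x/z,\mu^2),

where the P_{ij}(z) are the splitting functions referred to above; this is the standard textbook expression rather than anything specific to the talk.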

Next was the first lattice talk of the meeting, given by Huey-Wen Lin, who gave a review of the progress in lattice studies of nucleon structure. I think Huey-Wen gave a very nice example by comparing the computational and algorithmic progress with that in videogames (I'm not an expert there, but I think the examples shown were screenshots of Nethack versus some modern first-person shooter), and went on to explain the importance of controlling all systematic errors, in particular excited-state effects, before reviewing recent results on the tensor, scalar and axial charges and the electromagnetic form factors of the nucleon. As an outlook towards the current frontier, she presented the inclusion of disconnected diagrams and a new idea of obtaining PDFs from the lattice more directly rather than through their moments.

The next speaker was Robert D. McKeown with a review of JLab's Nuclear Science Programme. The CEBAF accelerator has been upgraded to 12 GeV, and a number of experiments (GlueX to search for gluonic excitations, MOLLER to study parity violation in Møller scattering, and SoLID to study SIDIS and PVDIS) are ready to be launched. A number of the planned experiments will be active in areas that I know are also under investigation by experimental colleagues in Mainz, such as a search for the "dark photon" and a study of the running of the Weinberg angle. Longer-term plans at JLab include the design of an electron-ion collider.

After a rather nice lunch, Tomofumi Nagae spoke about the hadron physics programme at J-PARC. In spite of major setbacks from the big earthquake and a later radiation accident, progress is being made. A search for the Θ+ pentaquark did not find a signal (which I personally do not find surprising, since the whole pentaquark episode is probably of more immediate long-term interest to historians and sociologists of science than to particle physicists), but could not completely exclude all of the discovery claims.

This was followed by a talk by Jonathan Miller of the MINERνA collaboration presenting their programme of probing nuclei with neutrinos. Major complications include the limited knowledge of the incoming neutrino flux and the fact that final-state interactions on the nuclear side may lead to one process mimicking another one, making the modelling in event generators a key ingredient of understanding the data.

Next was a talk about short-range correlations in nuclei by Or Hen. Nucleons subject to short-range correlations must have high relative momenta, but a low center-of-mass momentum. The experimental studies are based on kicking a proton out of a nucleus with an electron, such that both the momentum transfer (from the incoming and outgoing electron) and the final momentum of the proton are known, and looking for a nucleon with a momentum close to minus the difference between those two (which must be the initial momentum of the knocked-out proton) coming out. The astonishing result is that at high momenta, neutron-proton pairs dominate (meaning that protons, being the minority, have a much larger chance of having high momenta) and are linked by a tensor force. Similar results are known from other two-component Fermi systems, such as ultracold atomic gases (which are of course many, many orders of magnitude less dense than nuclei).

After the coffee break, Heinz Clement spoke about dibaryons, specifically about the recently discovered d*(2380) resonance, which taking all experimental results into account may be interpreted as a ΔΔ bound state.

The last talk of the day was by André Walker-Loud, who reviewed the study of nucleon-nucleon interactions and nuclear structure on the lattice, starting with a very nice review of the motivations behind such studies, namely the facts that big-bang nucleosynthesis is very strongly dependent on the deuterium binding energy and the proton-neutron mass difference, and this fine-tuning problem needs to be understood from first principles. Besides, currently the best chance for discovering BSM physics seems once more to lie with low-energy high-precision experiments, and dark matter searches require good knowledge of nuclear structure to control their systematics. Scattering phase shifts are being studied through the Lüscher formula. Current state-of-the-art studies of bound multi-hadron systems are related to dibaryons, in particular the question of the existence of the H-dibaryon at the physical pion mass (note that the dineutron, certainly unbound in the real world, becomes bound at heavy enough pion masses), and three- and four-nucleon systems are beginning to become treatable, although the signal-to-noise problem gets worse as more baryons are added to a correlation function, and the number of contractions grows rapidly. Going beyond masses and binding energies, the new California Lattice Collaboration (CalLat) has preliminary results for hadronic parity violation in the two-nucleon system, albeit at a pion mass of 800 MeV.

by Georg v. Hippel (noreply@blogger.com) at March 03, 2015 02:38 PM

Clifford V. Johnson - Asymptotia

dublab at LAIH
Mark ("Frosty") McNeill gave us a great overview of the work of the dublab collective at last Friday's LAIH luncheon. As I said in my introduction:
... dublab shows up as part of the DNA of many of the most engaging live events around the City (at MOCA, LACMA, Barnsdall, the Hammer, the Getty, the Natural History Museum, the Hollywood Bowl… and so on), and dublab is available in its core form as a radio project any time you like if you want to listen online. [...] dublab is a "non-profit web radio collective devoted to the growth of  positive music, arts and culture."
Frosty is a co-founder of dublab, and he told us a bit about its history, activities, and their new wonderful project called "Sound Share LA" which will be launching soon: They are creating a multimedia archive of Los Angeles based [...]

by Clifford at March 03, 2015 02:06 PM

Symmetrybreaking - Fermilab/SLAC

A telescope that tells you when to look up

The LSST system will alert scientists to changes in space in near-real time.

A massive digital camera will begin taking detailed snapshots from a mountaintop telescope in Chile in 2021. In just a few nights, the Large Synoptic Survey Telescope will amass more data than the Hubble Space Telescope gathered in its first 20 years of operation.

This unprecedented stream of images will trigger up to 10 million automated alerts each night, an average of about 10,000 per minute. The alerts will point out objects that appear to be changing in brightness, color or position—candidates for fast follow-up viewing using other telescopes.

To be ready for this astronomical flood of data, scientists are already working out the details of how to design the alert system to be widely and rapidly accessible.

“The number of alerts is far more than humans can filter manually,” says Jeff Kantor, LSST Data Management project manager. “Automated filters will be required to pick out the alerts of interest for any given scientist or project.”
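
As a toy illustration of what such a per-project filter might look like (the alert field names below are hypothetical, not LSST’s actual alert schema), a filter is essentially a predicate applied to every alert in the stream:

```python
from typing import Callable, Dict, Iterable, Iterator

Alert = Dict[str, float]             # hypothetical flat alert record
AlertFilter = Callable[[Alert], bool]

def supernova_candidate(alert: Alert) -> bool:
    """Toy criterion: a large, fast brightening of a previously faint source."""
    return (alert["delta_mag"] <= -1.0              # brightened by at least one magnitude
            and alert["days_since_previous"] <= 2.0
            and alert["reference_mag"] >= 21.0)     # faint or absent in the reference image

def select(stream: Iterable[Alert], keep: AlertFilter) -> Iterator[Alert]:
    """Pass on only the alerts a given scientist or project cares about."""
    return (alert for alert in stream if keep(alert))

# for alert in select(alert_stream, supernova_candidate):   # however the stream arrives
#     notify(alert)                                         # e-mail, app notification, etc.
```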

The alerts will provide information on the properties of newly discovered asteroids, supernovae, gamma-ray bursts, galaxies and stars with variable brightness, and other short-lived phenomena, Kantor says.

The alerts could come in the form of emails or other notifications on the Web or smartphone apps—and could be made accessible to citizen scientists as well, says Aaron Roodman, a SLAC National Accelerator Laboratory scientist working on the LSST camera.

Artwork by: Sandbox Studio, Chicago

“I think there actually will be fantastic citizen science coming out of this,” he says. “I think it will be possible for citizen scientists to create unique filters, identify new objects—I think it’s ripe for that. I think the immediacy of the data will be great.”

LSST’s camera will produce images with 400 times more pixels than those produced by the camera in the latest-model iPhone. It is designed to capture pairs of 15-second exposures before moving to the next position, recording subtle changes between the paired images and comparing them to previous images taken at the same position. LSST will cover the entire visible sky twice a week.

Alerts will be generated within about a minute of each snapshot, which is good news for people interested in studying ephemeral phenomena such as supernovae, says Alex Kim, a scientist at Lawrence Berkeley National Laboratory who is a member of the LSST collaboration.

“The very first light that you get from a supernova—that sharp flash—only lasts from minutes to days,” Kim says. “It’s very important to have an immediate response before that flash disappears.”

The alerts will be distributed by a common astronomical alert system like today’s Virtual Observatory Event distribution networks, says SLAC scientist Kian-Tat Lim, LSST data management system architect.

Caltech scientist Ashish Mahabal, a co-chair of the LSST transients and variables science working group, says that the alert system will need to be ready well before LSST construction is complete. It will be tested through simulations and could borrow from alert systems designed for other surveys.

The system that analyzes images to generate the LSST alerts will need to be capable of making about 40 trillion calculations per second. Mahabal says a basic system will likely be in place in the next two or three years.

 


by Glenn Roberts Jr. at March 03, 2015 02:00 PM

CERN Bulletin

Lecture | CERN prepares its long-term future: a 100-km circular collider to follow the LHC? | CERN Globe | 11 March
Particle physics is a long-term field of research: the LHC was originally conceived in the 1980s, but did not start running until 25 years later. An accelerator unlike any other, it is now just at the start of a programme that is set to run for another 20 years.

Frédérick Bordry.

While the LHC programme is already well defined for the next two decades, it is now time to look even further ahead, and so CERN is initiating an exploratory study for a future long-term project centred on a next-generation circular collider with a circumference of 80 to 100 kilometres. A worthy successor to the LHC, whose collision energies will reach 13 TeV in 2015, such an accelerator would allow particle physicists to push the boundaries of knowledge even further. The Future Circular Collider (FCC) programme will focus especially on studies for a hadron collider, like the LHC, capable of reaching unprecedented energies in the region of 100 TeV. Opening with an introduction to the LHC and its physics programme, this lecture will then focus on the feasibility of designing, building and operating a machine approaching 100 km in length and the biggest challenges that this would pose, as well as the different options for such a machine (proton-proton, electron-positron or electron-proton collisions). Lecture in French, accompanied by slides in English.

18:30-19:30: Talk: CERN prepares its future: a 100-km circular collider to follow the LHC?
19:30-20:00: Questions and Answers
Speaker: Frédérick Bordry, CERN Director for Accelerators and Technology
Entrance is free, but registration is mandatory: http://iyl.eventbrite.com

As Director for Accelerators and Technology, Frédérick Bordry is in charge of the operation of the whole CERN accelerator complex, with a special focus on the LHC (Large Hadron Collider), and the development of post-LHC projects and technologies. He is a graduate of the École Nationale Supérieure d’Électronique, d’Électrotechnique, d’Informatique et d’Hydraulique de Toulouse (ENSEEIHT) and earned the titles of docteur-ingénieur and docteur ès sciences at the Institut National Polytechnique de Toulouse (INPT). He worked for ten years as a teaching researcher in both those institutes and later held a professorship for two years at the Université Fédérale de Santa Catarina, Florianópolis, Brazil (1979-1981). Since joining CERN in 1986, he has fulfilled several roles, most notably in accelerator design and energy conversion. Always a strong believer in the importance of international exchange in culture, politics and science, he has devoted time to reflecting on issues relating to education, research and multilingualism. He is also convinced of the importance of pooling financial and human resources, especially at the European level.

March 03, 2015 11:15 AM

Tommaso Dorigo - Scientificblogging

Recent Results From Super-Kamiokande

(The XVIth edition of "Neutrino Telescopes" is going on in Venice this week. The writeup below is from a talk by M. Nakahata at the morning session today. For more on the conference and the results shown and discussed there, see the conference blog.)


by Tommaso Dorigo at March 03, 2015 10:20 AM

astrobites - astro-ph reader's digest

How did the Universe cool over time?

Title: Constraining the redshift evolution of the Cosmic Microwave Background black-body temperature with PLANCK data
Authors: I. de Martino et al.
First Author’s Institution: Fisica Teorica, Universidad de Salamanca

While numerous cosmological models have been proposed to describe the early history and evolution of the Universe, the Big Bang model is by far in the best agreement with current observations. Different cosmological models predict different behaviors for the temperature evolution of the cosmic microwave background over time, and the Big Bang model predicts that the CMB should cool adiabatically via expansion (i.e. without the addition or removal of any heat). We can describe the “overall” temperature of the Universe by measuring the blackbody spectrum of the cosmic microwave background, as it is believed that the early Universe was in thermal equilibrium (i.e. at the same temperature) with these CMB photons. By measuring the CMB temperature evolution over a range of redshifts (or equivalently, over cosmic history), we can test the consistency of the prevailing Big Bang model and search for any deviations from adiabatic expansion that might suggest new additions to our model.
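
Concretely, adiabatic expansion predicts the scaling T(z) = T_0 (1+z), and analyses of this kind typically test it by fitting the more general parametrization

T(z) = T_0 (1+z)^{1-\beta},

where \beta = 0 recovers the adiabatic prediction and a significantly nonzero \beta would point to non-standard physics (the notation may differ slightly from the paper's, but this is the standard form used in such tests).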

In this paper, the authors combine CMB data from the Planck mission and previously collected X-ray observations of galaxy clusters to obtain constraints on this adiabatic temperature change. Instead of measuring the CMB directly, the authors make use of the thermal Sunyaev-Zeldovich (tSZ) effect. This effect causes high energy electrons from the intracluster gas contained within a galaxy cluster to boost the energy of CMB photons through electron-photon scattering. The authors use Planck data to subtract the background CMB signal from the X-ray emission of the galaxy clusters to measure the boost in CMB photon energy induced by these clusters. These galaxy clusters have known redshifts, and this redshift data combined with CMB temperature measurements yield the history of CMB temperature changes over time.

The aim of this study is to measure the CMB temperature evolution over time, but this is done indirectly by first measuring the tSZ effect of galaxy clusters at various redshifts. The authors measure the CMB temperature shifts caused by the tSZ effect by subtracting the CMB background, modeling the frequency-dependence of the tSZ effect, and selecting the best-fitting model for the cluster. This tSZ effect measurement then yields the CMB temperature at a certain redshift. The resulting temperature evolution measurements are consistent with the CMB temperature evolving adiabatically over time, and are consistent with previous attempts to quantify this adiabatic cooling. Fig. 1 plots the redshift evolution of the inferred CMB temperature (scaled with respect to redshift and the current CMB temperature). The shaded region and blue points represent the measurements done in this paper, and this is consistent with the horizontal red line representing adiabatic evolution.

Fig. 1: CMB temperature (normalized with respect to redshift and present-day CMB temperature) plotted against redshift. The blue square points and shaded regions correspond to the measurements performed in this paper, which are in good agreement with the horizontal red line representing adiabatic cooling. The black dots are measurements from a previous study.

While this agreement is perhaps not too surprising given the spectacularly good predictions made by the Big Bang model, this kind of consistency check is important for maintaining our confidence in the Big Bang model and ruling out other potential cosmologies. For further validation, the authors plan to continue this analysis by expanding their sample of galaxy clusters to higher redshifts for additional consistency checks.

by Anson Lam at March 03, 2015 07:46 AM

March 02, 2015

Christian P. Robert - xi'an's og

Is Jeffreys’ prior unique?

“A striking characterisation showing the central importance of Fisher’s information in a differential framework is due to Cencov (1972), who shows that it is the only invariant Riemannian metric under symmetry conditions.” N. Polson, PhD Thesis, University of Nottingham, 1988

Following a discussion on Cross Validated, I wondered whether or not the affirmation that Jeffreys’ prior is the only prior construction rule that remains invariant under arbitrary (if smooth enough) reparameterisation actually holds. In the discussion, Paulo Marques mentioned Nikolaj Nikolaevič Čencov’s book, Statistical Decision Rules and Optimal Inference, a Russian book from 1972, of which I had not heard previously and which seems too theoretical [from Paulo’s comments] to explain why this rule would be the sole one. As I kept looking for Čencov’s references on the Web, I found Nick Polson’s thesis and the above quote. So maybe Nick could tell us more!
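
For completeness, the invariance itself is elementary to verify in one dimension: Jeffreys' rule sets \pi(\theta) \propto \sqrt{I(\theta)}, and under a smooth reparameterisation \phi = h(\theta) the Fisher information transforms as

I_\phi(\phi) = I_\theta(\theta) \left(\frac{d\theta}{d\phi}\right)^2,

so that \sqrt{I_\phi(\phi)}\,|d\phi| = \sqrt{I_\theta(\theta)}\,|d\theta| and the rule returns the same prior measure in either parameterisation. The open question raised by the quote is whether this is the only construction rule with that property.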

However, my uncertainty about the uniqueness of Jeffreys’ rule stems from the fact that, if I decide on a favourite or reference parametrisation—as Jeffreys indirectly does when selecting the parametrisation associated with a constant Fisher information—and on a prior derivation from the sampling distribution for this parametrisation, I have derived a parametrisation-invariant principle. Possibly silly and uninteresting from a Bayesian viewpoint but nonetheless invariant.


Filed under: Books, Statistics, University life Tagged: cross validated, Harold Jeffreys, Jeffreys priors, NIck Polson, Nikolaj Nikolaevič Čencov, Russian mathematicians

by xi'an at March 02, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

Understanding why our most Earth-like neighbor, Venus, is so different
Van Kane introduces us to EnVision—a proposed European mission to help improve our understanding of Venus.

March 02, 2015 06:43 PM

Lubos Motl - string vacua and pheno

Brian Greene's 14-hour audiobook
I admit that I have never bought an audiobook. And I don't even know what kind of devices or apps are able to play them. But most of us are able to play YouTube videos. And a user who is either a friend of Brian Greene or a pirate posted the full audiobook of "The Hidden Reality" to YouTube a month ago.




If you have a spare 13 hours and 49 minutes today (and tomorrow, not to mention the day after tomorrow), here are the first 8.3 hours:



And when you complete this one, you should continue.




Here are the remaining 5.5 hours:



If you haven't tried the videos yet, you may be puzzled: Who can the narrator be? Where in the world can you find someone who is able to read and flawlessly speak for 14 hours, without any need to eat, drink, use a toilet, or breathe in between? Actors don't have these physical abilities, do they?

Maybe the actors don't but some physicists do. Brian Greene has recorded the audiobook version of "The Hidden Reality" himself!

If you are as impressed as I am, you may go to the Amazon.com page at the top and register for a free 30-day trial of the "Audible" program which allows you to download two books for free. After 30 days, you may continue with "Audible" at $14.95 a month.

Alternatively, you will always be able to buy the audiobooks individually. "The Hidden Reality" audiobook is available via one click for $26.95.

I am amazed by this because whenever I record something, like my versions of songs or anything else, it takes at least 10 times more time to produce the thing than the final duration of the audio file. My guess is that it would take me something like 140 hours to record that and the quality wouldn't be anywhere close to Brian's recitation (not even in Czech).

by Luboš Motl (noreply@blogger.com) at March 02, 2015 06:42 PM

Sean Carroll - Preposterous Universe

Guest Post: An Interview with Jamie Bock of BICEP2

Jamie Bock

If you’re reading this you probably know about the BICEP2 experiment, a radio telescope at the South Pole that measured a particular polarization signal known as “B-modes” in the cosmic microwave background radiation. Cosmologists were very excited at the prospect that the B-modes were the imprint of gravitational waves originating from a period of inflation in the primordial universe; now, with more data from the Planck satellite, it seems plausible that the signal is mostly due to dust in our own galaxy. The measurements that the team reported were completely on-target, but our interpretation of them has changed — we’re still looking for direct evidence for or against inflation.

Here I’m very happy to publish an interview that was carried out with Jamie Bock, a professor of physics at Caltech and a senior research scientist at JPL, who is one of the leaders of the BICEP2 collaboration. It’s a unique look inside the workings of an incredibly challenging scientific effort.


New Results from BICEP2: An Interview with Jamie Bock

What does the new data from Planck tell you? What do you know now?

A scientific race has been under way for more than a decade among a dozen or so experiments trying to measure B-mode polarization, a telltale signature of gravitational waves produced from the time of inflation. Last March, BICEP2 reported a B-mode polarization signal, a twisty polarization pattern measured in a small patch of sky. The amplitude of the signal we measured was surprisingly large, exceeding what we expected for galactic emission. This implied we were seeing a large gravitational wave signal from inflation.

We ruled out galactic synchrotron emission, which comes from electrons spiraling in the magnetic field of the galaxy, using low-frequency data from the WMAP [Wilkinson Microwave Anisotropy Probe] satellite. But there were no data available on polarized galactic dust emission, and we had to use models. These models weren’t starting from zero; they were built on well-known maps of unpolarized dust emission, and, by and large, they predicted that polarized dust emission was a minor constituent of the total signal.

Obviously, the answer here is of great importance for cosmology, and we have always wanted a direct test of galactic emission using data in the same piece of sky so that we can test how much of the BICEP2 signal is cosmological, representing gravitational waves from inflation, and how much is from galactic dust. We did exactly that with galactic synchrotron emission from WMAP because the data were public. But with galactic dust emission, we were stuck, so we initiated a collaboration with the Planck satellite team to estimate and subtract polarized dust emission. Planck has the world’s best data on polarized emission from galactic dust, measured over the entire sky in multiple spectral bands. However, the polarized dust maps were only recently released.

On the other side, BICEP2 gives us the highest-sensitivity data available at 150 GHz to measure the CMB. Interestingly, the two measurements are stronger in combination. We get a big boost in sensitivity by putting them together. Also, the detectors for both projects were designed, built, and tested at Caltech and JPL, so I had a personal interest in seeing that these projects worked together. I’m glad to say the teams worked efficiently and harmoniously together.

What we found is that when we subtract the galaxy, we just see noise; no signal from the CMB is detectable. Formally we can say at least 40 percent of the total BICEP2 signal is dust and less than 60 percent is from inflation.

How do these new data shape your next steps in exploring the earliest moments of the universe?

It is the best we can do right now, but unfortunately the result with Planck is not a very strong test of a possible gravitational wave signal. This is because the process of subtracting galactic emission effectively adds more noise into the analysis, and that noise limits our conclusions. While the inflationary signal is less than 60 percent of the total, that is not terribly informative, leaving many open questions. For example, it is quite possible that the noise prevents us from seeing part of the signal that is cosmological. It is also possible that all of the BICEP2 signal comes from the galaxy. Unfortunately, we cannot say more because the data are simply not precise enough. Our ability to measure polarized galactic dust emission in particular is frustratingly limited.

Figure 1:  Maps of CMB polarization produced by BICEP2 and Keck Array.  The maps show the  ‘E-mode’ polarization pattern, a signal from density variations in the CMB, not gravitational  waves.  The polarization is given by the length and direction of the lines, with a coloring to better  show the sign and amplitude of the E-mode signal.  The tapering toward the edges of the map is  a result of how the instruments observed this region of sky.  While the E-mode pattern is about 6  times brighter than the B-mode signal, it is still quite faint.  Tiny variations of only 1 millionth of  a degree kelvin are faithfully reproduced across these multiple measurements at 150 GHz, and in  new Keck data at 95 GHz still under analysis.  The very slight color shift visible between 150  and 95 GHz is due to the change in the beam size.


However, there is good news to report. In this analysis, we added new data obtained in 2012–13 from the Keck Array, an instrument with five telescopes and the successor to BICEP2 (see Fig. 1). These data are at the same frequency band as BICEP2—150 GHz—so while they don’t help subtract the galaxy, they do increase the total sensitivity. The Keck Array clearly detects the same signal detected by BICEP2. In fact, every test we can do shows the two are quite consistent, which demonstrates that we are doing these difficult measurements correctly (see Fig. 2). The BICEP2/Keck maps are also the best ever made, with enough sensitivity to detect signals that are a tiny fraction of the total.

Figure 2: A power spectrum of the B-mode polarization signal that plots the strength of the signal as a function of angular frequency. The data show a signal significantly above what is expected for a universe without gravitational waves, given by the red line. The excess peaks at angular scales of about 2 degrees. The independent measurements of BICEP2 and Keck Array shown in red and blue are consistent within the errors, and their combination is shown in black. Note the sets of points are slightly shifted along the x-axis to avoid overlaps.

In addition, Planck’s measurements over the whole sky show the polarized dust is fairly well behaved. For example, the polarized dust has nearly the same spectrum across the sky, so there is every reason to expect we can measure and remove dust cleanly.

To better subtract the galaxy, we need better data. We aren’t going to get more data from Planck because the mission has finished. The best way is to measure the dust ourselves by adding new spectral bands to our own instruments. We are well along in this process already. We added a second band to the Keck Array last year at 95 GHz and a third band this year at 220 GHz. We just installed the new BICEP3 instrument at 95 GHz at the South Pole (see Fig. 3). BICEP3 is a single telescope that will soon be as powerful as all five Keck Array telescopes put together. At 95 GHz, Keck and BICEP3 should surpass BICEP2’s 150 GHz sensitivity by the end of this year, and the two will be a very powerful combination indeed. If we switch the Keck Array entirely over to 220 GHz starting next year, we can get a third band to a similar depth.

Figure 3: BICEP3 installed and carrying out calibration measurements off a reflective mirror placed above the receiver. The instrument is housed within a conical reflective ground shield to minimize the brightness contrast between the warm earth and cold space. This picture was taken at the beginning of the winter season, with no physical access to the station for the next 8 months, when BICEP3 will conduct astronomical observations (Credit: Sam Harrison)

Finally, this January the SPIDER balloon experiment, which is also searching the CMB for evidence of inflation, completed its first flight, outfitted with comparable sensitivity at 95 and 150 GHz. Because SPIDER floats above the atmosphere (see Fig. 4), we can also measure the sky on larger spatial scales. This all adds up to make the coming years very exciting.

Figure 4: View of the earth and the edge of space, taken from an optical camera on the SPIDER gondola at float altitude shortly after launch. Clearly visible below is Ross Island, with volcanos Mt. Erebus and Mt. Terror and the McMurdo Antarctic base, the Royal Society mountain range to the left, and the edge of the Ross permanent ice shelf. (Credit: SPIDER team).

Why did you make the decision last March to release results? In retrospect, do you regret it?

We knew at the time that any news of a B-mode signal would cause a great stir. We started working on the BICEP2 data in 2010, and our standard for putting out the paper was that we were certain the measurements themselves were correct. It is important to point out that, throughout this episode, our measurements basically have not changed. As I said earlier, the initial BICEP2 measurement agrees with new data from the Keck Array, and both show the same signal. For all we know, the B-mode polarization signal measured by BICEP2 may contain a significant cosmological component—that’s what we need to find out.

The question really is, should we have waited until better data were available on galactic dust? Personally, I think we did the right thing. The field needed to be able to react to our data and test the results independently, as we did in our collaboration with Planck. This process hasn’t ended; it will continue with new data. Also, the searches for inflationary gravitational waves are influenced by these findings, and it is clear that all of the experiments in the field need to focus more resources on measuring the galaxy.

How confident are you that you will ultimately find conclusive evidence for primordial gravitational waves and the signature of cosmic inflation?

I don’t have an opinion about whether or not we will find a gravitational wave signal—that is why we are doing the measurement! But any result is so significant for cosmology that it has to be thoroughly tested by multiple groups. I am confident that the measurements we have made to date are robust, and the new data we need to subtract the galaxy more accurately are starting to pour forth. The immediate path forward is clear: we know how to make these measurements at 150 GHz, and we are already applying the same process to the new frequencies. Doing the measurements ourselves also means they are uniform, so we understand all of the errors, which, in the end, are just as important.

What will it mean for our understanding of the universe if you don’t find the signal?

The goal of this program is to learn how inflation happened. Inflation requires matter-energy with an unusual repulsive property in order to rapidly expand the universe. The physics is almost certainly new and exotic, at energies too high to be accessed with terrestrial particle accelerators. CMB measurements are one of the few ways to get at the inflationary physics, and we need to squeeze them for all they are worth. A gravitational wave signal is very interesting because it tells us about the physical process behind inflation. A detection of the polarization signal at a high level means that certain models of inflation, perhaps along the lines of the models first developed, are a good explanation.

But here again is the real point: we also learn more about inflation if we can rule out polarization from gravitational waves. No detection at 5 percent or less of the total BICEP2 signal means that inflation is likely more complicated, perhaps involving multiple fields, although there are certainly other possibilities. Either way is a win, and we’ll find out more about what caused the birth of the universe 13.8 billion years ago.

Our team dedicated itself to the pursuit of inflationary polarization 15 years ago fully expecting a long and difficult journey. It is exciting, after all this work, to be at this stage where the polarization data are breaking into new ground, providing more information about gravitational waves than we learned before. The BICEP2 signal was a surprise, and its ultimate resolution is still a work in progress. The data we need to address these questions about inflation are within sight, and whatever the answers are, they are going to be interesting, so stay tuned.

by Sean Carroll at March 02, 2015 04:05 PM

Tommaso Dorigo - Scientificblogging

Francis Halzen On Cosmogenic Neutrinos
During the first afternoon session of the XVI Neutrino Telescopes conference (here is the conference blog, which contains a report of most of the lectures and posters as they are presented) Francis Halzen gave a very nice account of the discovery of cosmogenic neutrinos by the IceCube experiment, and its implications. Below I offer a writeup - apologizing to Halzen if I misinterpreted anything.

read more

by Tommaso Dorigo at March 02, 2015 04:00 PM

arXiv blog

The Curious Adventures of an Astronomer-Turned-Crowdfunder

Personal threats, legal challenges, and NASA’s objections were just a few of the hurdles Travis Metcalfe faced when he set up a crowdfunding website to help pay for his astronomical research.

If you want to name a star or buy a crater on the moon or own an acre on Mars, there are numerous websites that can help. The legal status of such “ownership” is far from clear but the services certainly allow for a little extraterrestrial fun.

March 02, 2015 03:54 PM

Quantum Diaries

30 reasons why you shouldn’t be a particle physicist

1. Some people think that physics is exciting.

(ATLAS)

2. They say “There’s nothing like the thrill of discovery”.

(ALICE Masterclass)

3. But that feeling won’t prepare you for the real world.

(CERN)

4. Discoveries only happen once. Do you really want to be in the room when they happen?

(CERN)

5. It’s not as though people queue overnight for the big discoveries.

(CERN)

6. CERN’s one of the biggest labs in the world. It’s like Disneyland, but for physicists.

7. The machines are among the most complex in the world.

(Francois Becler)

8. Seriously, don’t mess with those machines.

(CERN)

9. They’re not even nice to look at.

(Michael Hoch, Maximilien Brice)

10. The machines are so big you have to drive through the French countryside to get from one side to the other.

11. There’s nothing beautiful about the French countryside.

12. And there’s nothing cool about working on the world’s biggest computing grid with some of the most powerful supercomputers ever created.

(CERN)

13. A dataset so big you can’t fit it all in one place? Please.

(CERN)

14. So you can do your analysis from anywhere in the world? Lame!

(CERN Courier)

15. And our conferences always take place in strange places.

16. Who has time to travel?

17. Some people even take time away from the lab to go skiing.

(LHCb)

18. Physicists have been working on this stuff for decades. Nobody remembers any of these people:

(Wikipedia)

19. But particle physics is only about understanding the universe on the most fundamental level.

20. We don’t even have a well stocked library to help us when things get tough.

21. Or professors and experts to explain things to us.

22. And the public don’t care about what we do.

(CERN)

23. Even the press don’t pay any attention.

(Sean Treacy)

24. And who wants to contribute to the sum of human knowledge anyway?

(STFC)

25. There’s nothing exciting about being on shift in the Control Room either.

(ATLAS)

26. Or travelling the world to collaborate.

27. Or meeting hundreds of people, each with their own story and background.

28. You never get to meet any interesting people.

(CERN)

29. And physicists have no sense of humour.

30. Honestly, who would want to be a physicist?

(CMS)

References:

  • http://www.atlas.ch/news/2008/first-beam-and-event.html
  • http://opendata.cern.ch/collection/ALICE-Learning-Resources
  • http://cds.cern.ch/record/1406060?ln=en
  • http://cds.cern.ch/record/1459634
  • http://cds.cern.ch/record/1459503?ln=en
  • http://cds.cern.ch/record/1474902/files/
  • https://cds.cern.ch/record/1643071/
  • http://cds.cern.ch/record/1436153?ln=en
  • http://home.web.cern.ch/about/computing
  • http://home.web.cern.ch/about/computing/grid-software-middleware-hardware
  • http://cerncourier.com/cws/article/cern/52744
  • http://lhcb.web.cern.ch/lhcb/fun/FunNewPage/album-crozet-jan2012/index.html
  • http://en.wikipedia.org/wiki/Solvay_Conference
  • http://home.web.cern.ch/about/updates/2014/05/cern-celebrates-its-anniversary-its-neighbours
  • https://atlas-service-enews.web.cern.ch/atlas-service-enews/2009/news_09/news_beam09.php
  • http://the-sieve.com/2012/07/06/higgsmania/
  • http://www.stfc.ac.uk/imagelibrary/displayImage.aspx?p=593
  • http://press.highenergyphysicsmedia.com/ichep-2012-cern-announcment.html
  • http://cds.cern.ch/record/1965972?ln=en
  • http://cds.cern.ch/record/1363014/

by Aidan Randle-Conde at March 02, 2015 03:16 PM

Peter Coles - In the Dark

Uncertainty, Risk and Probability

Last week I attended a very interesting event on the Sussex University campus, the Annual Marie Jahoda Lecture, which was given this year by Prof. Helga Nowotny, a distinguished social scientist. The title of the talk was A social scientist in the land of scientific promise and the abstract was as follows:

Promises are a means of bringing the future into the present. Nowhere is this insight by Hannah Arendt more applicable than in science. Research is a long and inherently uncertain process. The question is open which of the multiple possible, probable or preferred futures will be actualized. Yet, scientific promises, vague as they may be, constitute a crucial link in the relationship between science and society. They form the core of the metaphorical ‘contract’ in which support for science is stipulated in exchange for the benefits that science will bring to the well-being and wealth of society. At present, the trend is to formalize scientific promises through impact assessment and measurement. Against this background, I will present three case studies from the life sciences: assisted reproductive technologies, stem cell research and the pending promise of personalized medicine. I will explore the uncertainty of promises as well as the cunning of uncertainty at work.

It was a fascinating and wide-ranging lecture that touched on many themes. I won’t try to comment on all of them, but just pick up on a couple that struck me from my own perspective as a physicist. One was the increasing aversion to risk demonstrated by research funding agencies, such as the European Research Council, which she helped set up but described in the lecture as “a clash between a culture of trust and a culture of control”. This will ring true to any scientist applying for grants, even in “blue skies” disciplines such as astronomy: we tend to trust our peers, who have some control over funding decisions, but the machinery of control from above gets stronger every day. Milestones and deliverables are everything. Sometimes I think that in order to get funding you have to be so confident of the outcomes of your research that you must have already done it, in which case funding isn’t even necessary. The importance of extremely speculative research is rarely recognized, although that is where there is the greatest potential for truly revolutionary breakthroughs.

Another theme that struck me was the role of uncertainty and risk. This grabbed my attention because I’ve actually written a book about uncertainty in the physical sciences. In her lecture, Prof. Nowotny referred to the definition (which was quite new to me) of these two terms by Frank Hyneman Knight in a book on economics called Risk, Uncertainty and Profit. The distinction made there is that “risk” is “randomness” with “knowable probabilities”, whereas “uncertainty” involves “randomness” with “unknowable probabilities”. I don’t like these definitions at all. For one thing they both involve a reference to “randomness”, a word which I don’t know how to define anyway; I’d be much happier to use “unpredictability”. Even more importantly, perhaps, I find the distinction between “knowable” and “unknowable” probabilities very problematic. One always knows something about a probability distribution, even if that something means that the distribution has to be very broad. And in any case these definitions imply that the probabilities concerned are “out there”, rather than being statements about a state of knowledge (or lack thereof). Sometimes we know what we know and sometimes we don’t, but there are more than two possibilities. As the great American philosopher and social scientist Donald Rumsfeld (Shurely Shome Mishtake? Ed) put it:

“…as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”

There may be a proper Bayesian formulation of the distinction between “risk” and “uncertainty” that involves a transition between prior-dominated (uncertain) and posterior-dominated (risky) regimes, but basically I don’t see any qualitative difference between the two from such a perspective.
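
To make this concrete, here is a toy sketch of my own (a Beta-Binomial example, nothing to do with the lecture itself): with hardly any data the posterior is essentially the prior, which is the “uncertainty” end of the spectrum, while with plenty of data the posterior is dominated by the likelihood, which feels much more like quantifiable “risk”.

    # A toy sketch (my own) of the prior-dominated vs posterior-dominated idea,
    # using a Beta-Binomial model for a success probability.  Requires SciPy.
    from scipy import stats

    prior_a, prior_b = 2.0, 2.0                  # a vague Beta(2, 2) prior

    for n, k in [(4, 1), (10000, 2500)]:         # (trials, successes): scarce vs plentiful data
        posterior = stats.beta(prior_a + k, prior_b + n - k)
        lo, hi = posterior.interval(0.95)
        print(f"n = {n:5d}: 95% credible interval ({lo:.3f}, {hi:.3f})")

    # With n = 4 the interval is wide and mostly reflects the prior ("uncertainty");
    # with n = 10000 it is narrow and driven by the data ("risk").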

Anyway, it was a very interesting lecture that differed from many talks I’ve attended about the sociology of science in that the speaker clearly understood a lot about how science actually works. The Director of the Science Policy Research Unit invited the Heads of the Science Schools (including myself) to dinner with the speaker afterwards, and that led to the generation of many interesting ideas about how we (I mean scientists and social scientists) might work better together in the future, something we really need to do.


by telescoper at March 02, 2015 01:25 PM

Christian P. Robert - xi'an's og

market static

[Heard in the local market, while queuing for cheese:]

– You took too much!

– Maybe, but remember your sister is staying for two days.

– My sister…, as usual, she will take a big serving and leave half of it!

– Yes, but she will make sure to finish the bottle of wine!


Filed under: Kids, Travel Tagged: farmers' market, métro static

by xi'an at March 02, 2015 01:18 PM

Tommaso Dorigo - Scientificblogging

Neutrino Physics: Poster Excerpts from Neutel XVI
The XVI edition of "Neutrino Telescopes" is about to start in Venice today. In the meantime, I have started to publish in the conference blog a few excerpts of the posters that compete for the "best poster award" at the conference this week. You might be interested to check them out:

read more

by Tommaso Dorigo at March 02, 2015 12:11 PM

Peter Coles - In the Dark

Poll of Polls now updated to 27-02-15 – no real change from last week

telescoper:

Just thought I’d reblog this to show how close it seems the May 2015 General Election will be. The situation with respect to seats is even more complex. It looks like Labour will lose many of their seats in Scotland to the SNP, but the Conservatives will probably only lose a handful to UKIP.

It looks to me as if another hung Parliament is on the cards, so coalitions of either Con+Lib+UKIP or Lab+SNP+Lib are distinct possibilities.

Originally posted on More Known Than Proven:

Poll of Polls - 270215

I’ve updated my “Poll of Polls” to include 13 more polls that were carried out since I did my last graph. The graphs now include the Greens as I now have data for them too.

Overall this Poll of Polls shows no real change from last week.

If you want to download the spreadsheet that did this analysis go here. If you want to understand the methodology behind the “Poll of Polls” click here and scroll down to the bit that gives the description.



by telescoper at March 02, 2015 10:22 AM

March 01, 2015

Christian P. Robert - xi'an's og

trans-dimensional nested sampling and a few planets

This morning, in the train to Dauphine (a train that was even more delayed than usual!), I read a recent arXival of Brendon Brewer and Courtney Donovan. Entitled Fast Bayesian inference for exoplanet discovery in radial velocity data, the paper suggests associating Matthew Stephens’ (2000) birth-and-death MCMC approach with nested sampling to infer the number N of exoplanets in an exoplanetary system. The paper is somewhat sparse in its description of the suggested approach, but states that the birth-death moves involve adding a planet with parameters simulated from the prior and removing a planet at random, both being accepted under a likelihood constraint associated with nested sampling. I actually wonder if this is the birth-death version of Peter Green’s (1995) RJMCMC rather than the continuous time birth-and-death process version of Matthew’s…
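
For the curious, here is a minimal sketch of the kind of move I understand the paper to describe (my own reading, not the authors’ code: draw_planet_from_prior and log_likelihood are hypothetical stand-ins, and I ignore the proposal-ratio bookkeeping a full reversible-jump treatment would track):

    # A sketch of a birth/death move accepted under a nested-sampling
    # likelihood constraint L_star (my own reconstruction, not the paper's code).
    import random

    def birth_death_move(planets, L_star, draw_planet_from_prior, log_likelihood, N_max=15):
        """Propose adding or removing one planet; keep the proposal only if
        its likelihood stays above the current threshold L_star."""
        proposal = list(planets)
        if planets and (random.random() < 0.5 or len(planets) >= N_max):
            proposal.pop(random.randrange(len(proposal)))   # death: remove a planet at random
        else:
            proposal.append(draw_planet_from_prior())       # birth: parameters simulated from the prior
        return proposal if log_likelihood(proposal) > L_star else planets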

“The traditional approach to inferring N also contradicts fundamental ideas in Bayesian computation. Imagine we are trying to compute the posterior distribution for a parameter a in the presence of a nuisance parameter b. This is usually solved by exploring the joint posterior for a and b, and then only looking at the generated values of a. Nobody would suggest the wasteful alternative of using a discrete grid of possible a values and doing an entire Nested Sampling run for each, to get the marginal likelihood as a function of a.”

This criticism has merit when there is a huge number of possible values of N, even though I see no fundamental contradiction with my ideas about Bayesian computation. However, it is more debatable when there are only a few possible values for N, given that the exploration of the augmented space by an RJMCMC algorithm is often very inefficient, in particular when the proposed parameters are generated from the prior. All the more so when nested sampling is involved and simulations are run under the likelihood constraint! In the astronomy examples given in the paper, N never exceeds 15… Furthermore, by merging all N’s together, it is unclear how the evidences associated with the various values of N can be computed. At least, those are not reported in the paper.

The paper also does not provide the likelihood function, so I do not completely understand where “label switching” occurs therein. My first impression is that this is not a mixture model. However, if the observed signal (from an exoplanetary system) is the sum of N signals corresponding to N planets, this makes more sense.


Filed under: Books, Statistics, Travel, University life Tagged: birth-and-death process, Chamonix, exoplanet, label switching, métro, nested sampling, Paris, RER B, reversible jump, Université Paris Dauphine

by xi'an at March 01, 2015 11:15 PM

John Baez - Azimuth

Visual Insight

I have another blog, called Visual Insight. Over here, our focus is on applying science to help save the planet. Over there, I try to make the beauty of pure mathematics visible to the naked eye.

I’m always looking for great images, so if you know about one, please tell me about it! If not, you may still enjoy taking a look.

Here are three of my favorite images from that blog, and a bit about the people who created them.

I suspect that these images, and many more on Visual Insight, are all just different glimpses of the same big structure. I have a rough idea what that structure is. Sometimes I dream of a computer program that would let you tour the whole thing. Unfortunately, a lot of it lives in more than 3 dimensions.

Less ambitiously, I sometimes dream of teaming up with lots of mathematicians and creating a gorgeous coffee-table book about this stuff.

 

Schmidt arrangement of the Eisenstein integers

 

Schmidt Arrangement of the Eisenstein Integers - Katherine Stange

This picture drawn by Katherine Stange shows what happens when we apply fractional linear transformations

z \mapsto \frac{a z + b}{c z + d}

to the real line sitting in the complex plane, where a,b,c,d are Eisenstein integers: that is, complex numbers of the form

m + n \omega, \qquad \omega = \frac{-1 + \sqrt{-3}}{2}

where m,n are integers. The result is a complicated set of circles and lines called the ‘Schmidt arrangement’ of the Eisenstein integers. For more details go here.
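
Here is a rough sketch (mine, not Katherine Stange’s actual code) of the basic computation behind such a picture: push three real numbers through a fractional linear transformation and recover the circle they land on. The particular coefficients at the end are only illustrative; to build the genuine Schmidt arrangement one would range over a whole family of such transformations.

    # Sketch: the image of the real line under z -> (a z + b)/(c z + d) is a circle
    # (or another line); recover it from the images of three real points.
    import cmath

    def circumcircle(z1, z2, z3):
        """Centre and radius of the circle through three complex points
        (returns None if the points are collinear, i.e. the image is a line)."""
        num = abs(z1)**2 * (z2 - z3) + abs(z2)**2 * (z3 - z1) + abs(z3)**2 * (z1 - z2)
        den = z1.conjugate() * (z2 - z3) + z2.conjugate() * (z3 - z1) + z3.conjugate() * (z1 - z2)
        if abs(den) < 1e-12:
            return None
        centre = num / den
        return centre, abs(z1 - centre)

    def image_of_real_line(a, b, c, d, samples=(0.0, 1.0, -1.0)):
        """Circle traced out by the real line under z -> (a z + b)/(c z + d)."""
        mobius = lambda z: (a * z + b) / (c * z + d)
        return circumcircle(*(mobius(complex(t)) for t in samples))

    # Illustrative coefficients built from Eisenstein integers:
    omega = (-1 + cmath.sqrt(-3)) / 2
    print(image_of_real_line(1, omega, 2 * omega + 1, 1))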

Katherine Stange did her Ph.D. with Joseph H. Silverman, an expert on elliptic curves at Brown University. Now she is an assistant professor at the University of Colorado, Boulder. She works on arithmetic geometry, elliptic curves, algebraic and integer sequences, cryptography, arithmetic dynamics, Apollonian circle packings, and game theory.

 

{7,3,3} honeycomb


This is the {7,3,3} honeycomb as drawn by Danny Calegari. The {7,3,3} honeycomb is built of regular heptagons in 3-dimensional hyperbolic space. It’s made of infinite sheets of regular heptagons in which 3 heptagons meet at each vertex. 3 such sheets meet at each edge of each heptagon, explaining the second ‘3’ in the symbol {7,3,3}.

The 3-dimensional regions bounded by these sheets are unbounded: they go off to infinity. They show up as holes here. In this image, hyperbolic space has been compressed down to an open ball using the so-called Poincaré ball model. For more details, go here.

Danny Calegari did his Ph.D. work with Andrew Casson and William Thurston on foliations of three-dimensional manifolds. Now he’s a professor at the University of Chicago, and he works on these and related topics, especially geometric group theory.

 

{7,3,3} honeycomb meets the plane at infinity

This picture, by Roice Nelson, is another view of the {7,3,3} honeycomb. It shows the ‘boundary’ of this honeycomb—that is, the set of points on the surface of the Poincaré ball that are limits of points in the {7,3,3} honeycomb.

Roice Nelson used stereographic projection to draw part of the surface of the Poincaré ball as a plane. The circles here are holes, not contained in the boundary of the {7,3,3} honeycomb. There are infinitely many holes, and the actual boundary, the region left over, is a fractal with area zero. The white region on the outside of the picture is yet another hole. For more details, and a different version of this picture, go here.

Roice Nelson is a software developer for a flight data analysis company. There’s a good chance the data recorded on the airplane from your last flight moved through one of his systems! He enjoys motorcycling and recreational mathematics, he has a blog with lots of articles about geometry, and he makes plastic models of interesting geometrical objects using a 3d printer.



by John Baez at March 01, 2015 10:46 PM

Peter Coles - In the Dark

A Poem for St David’s Day

It’s St David’s Day today, so

Dydd Gŵyl Dewi Hapus!

As has become traditional on this blog, I am going to mark the occasion by posting a poem by the great Welsh poet R.S. Thomas. This is called Welsh Testament.

All right, I was Welsh. Does it matter?
I spoke a tongue that was passed on
To me in the place I happened to be,
A place huddled between grey walls
Of cloud for at least half the year.
My word for heaven was not yours.
The word for hell had a sharp edge
Put on it by the hand of the wind
Honing, honing with a shrill sound
Day and night. Nothing that Glyn Dwr
Knew was armour against the rain’s
Missiles. What was descent from him?

Even God had a Welsh name:
He spoke to him in the old language;
He was to have a peculiar care
For the Welsh people. History showed us
He was too big to be nailed to the wall
Of a stone chapel, yet still we crammed him
Between the boards of a black book.

Yet men sought us despite this.
My high cheek-bones, my length of skull
Drew them as to a rare portrait
By a dead master. I saw them stare
From their long cars, as I passed knee-deep
In ewes and wethers. I saw them stand
By the thorn hedges, watching me string
The far flocks on a shrill whistle.
And always there was their eyes; strong
Pressure on me: You are Welsh, they said;
Speak to us so; keep your fields free
Of the smell of petrol, the loud roar
Of hot tractors; we must have peace
And quietness.

Is a museum
Peace? I asked. Am I the keeper
Of the heart’s relics, blowing the dust
In my own eyes? I am a man;
I never wanted the drab role
Life assigned me, an actor playing
To the past’s audience upon a stage
Of earth and stone; the absurd label
Of birth, of race hanging askew
About my shoulders. I was in prison
Until you came; your voice was a key
Turning in the enormous lock
Of hopelessness. Did the door open
To let me out or yourselves in?


by telescoper at March 01, 2015 12:48 PM

Jester - Resonaances

Weekend Plot: Bs mixing phase update
Today's featured plot was released last week by the LHCb collaboration:

It shows the CP violating phase in Bs meson mixing, denoted as φs,  versus the difference of the decay widths between the two Bs meson eigenstates. The interest in φs comes from the fact that it's  one of the precious observables that 1) is allowed by the symmetries of the Standard Model, 2) is severely suppressed due to the CKM structure of flavor violation in the Standard Model. Such observables are a great place to look for new physics (other observables in this family include Bs/Bd→μμ, K→πνν, ...). New particles, even too heavy to be produced directly at the LHC, could produce measurable contributions to φs as long as they don't respect the Standard Model flavor structure. For example, a new force carrier with a mass as large as 100-1000 TeV and order 1 flavor- and CP-violating coupling to b and s quarks would be visible given the current experimental precision. Similarly, loops of supersymmetric particles with 10 TeV masses could show up, again if the flavor structure in the superpartner sector is not aligned with that in the  Standard Model.

The phase φs can be measured in certain decays of neutral Bs mesons where the process involves an interference of direct decays and decays through oscillation into the anti-Bs meson. Several years ago measurements at Tevatron's D0 and CDF experiments suggested a large new physics contribution. The mild excess has gone away since, like many other such hints.  The latest value quoted by LHCb is φs = - 0.010 ± 0.040, which combines earlier measurements of the Bs → J/ψ π+ π- and  Bs → Ds+ Ds- decays with  the brand new measurement of the Bs → J/ψ K+ K- decay. The experimental precision is already comparable to the Standard Model prediction of φs = - 0.036. Further progress is still possible, as the Standard Model prediction can be computed to a few percent accuracy.  But the room for new physics here is getting tighter and tighter.

by Jester (noreply@blogger.com) at March 01, 2015 11:23 AM

February 28, 2015

Christian P. Robert - xi'an's og

ice-climbing Niagara Falls

I had missed the news that a frozen portion of Niagara Falls had been ice-climbed, by Will Gadd on Jan. 27. This is obviously quite impressive given the weird and dangerous nature of the ice there, which is mostly frozen foam from the nearby waterfall. (I once climbed an easy route on such ice at the Chutes Montmorency, near Québec City, and it felt quite strange…) He even had a special ice hook designed for that climb, as he did not trust the usual ice screws. Will Gadd has, however, climbed much more difficult routes, like Helmcken Falls in British Columbia, which may be the hardest mixed route in the world!


Filed under: Mountains, pictures Tagged: British Columbia, Canada, Helmcken Falls, ice climbing, Niagara Falls, Niagara-on-the-Lake, USA

by xi'an at February 28, 2015 11:15 PM

Peter Coles - In the Dark

That Big Black Hole Story

There’s been a lot of news coverage this week about a very big black hole, so I thought I’d post a little bit of background.  The paper describing the discovery of the object concerned appeared in Nature this week, but basically it’s a quasar at a redshift z=6.30. That’s not the record for such an object. Not long ago I posted an item about the discovery of a quasar at redshift 7.085, for example. But what’s interesting about this beastie is that it’s a very big beastie, with a central black hole estimated to have a mass of around 12 billion times the mass of the Sun, which is a factor of ten or more larger than other objects found at high redshift.

Anyway, I thought perhaps it might be useful to explain a little bit about what difficulties this observation might pose for the standard “Big Bang” cosmological model. Our general understanding of how galaxies form is that gravity gathers cold non-baryonic matter into clumps, into which “ordinary” baryonic material subsequently falls, eventually forming a luminous galaxy surrounded by a “halo” of (invisible) dark matter. Quasars are galaxies in which enough baryonic matter has collected in the centre of the halo to build a supermassive black hole, which powers a short-lived phase of extremely high luminosity.

The key idea behind this picture is that the haloes form by hierarchical clustering: the first to form are small but  merge rapidly  into objects of increasing mass as time goes on. We have a fairly well-established theory of what happens with these haloes – called the Press-Schechter formalism – which allows us to calculate the number-density N(M,z) of objects of a given mass M as a function of redshift z. As an aside, it’s interesting to remark that the paper largely responsible for establishing the efficacy of this theory was written by George Efstathiou and Martin Rees in 1988, on the topic of high redshift quasars.

Anyway, this is how the mass function of haloes is predicted to evolve in the standard cosmological model; the different lines show the distribution as a function of redshift for redshifts from 0 (red) to 9 (violet):

Note that the typical size of a halo increases with decreasing redshift, but it’s only at really high masses where you see a really dramatic effect. The plot is logarithmic, so the number density of large-mass haloes falls off by several orders of magnitude over the range of redshifts shown. The mass of the black hole responsible for the recently-detected high-redshift quasar is estimated to be about 1.2 \times 10^{10} M_{\odot}. But how does that relate to the mass of the halo within which it resides? Clearly the dark matter halo has to be more massive than the baryonic material it collects, and therefore more massive than the central black hole, but by how much?

This question is very difficult to answer, as it depends on how luminous the quasar is, how long it lives, what fraction of the baryons in the halo fall into the centre, what efficiency is involved in generating the quasar luminosity, etc. Efstathiou and Rees argued that to power a quasar with luminosity of order 10^{13} L_{\odot} for a time of order 10^{8} years requires a parent halo of mass about 2\times 10^{11} M_{\odot}. Generally, it’s a reasonable back-of-an-envelope estimate that the halo mass would be about a hundred times larger than that of the central black hole, so the halo housing this one could be around 10^{12} M_{\odot}.
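
Just to spell out the arithmetic of that back-of-the-envelope estimate (the factor of a hundred is only the rough rule of thumb quoted above):

    # Rough numbers only: the halo-to-black-hole mass ratio of ~100 is a rule of thumb.
    M_bh = 1.2e10          # estimated black-hole mass, in solar masses
    halo_to_bh = 100.0     # rough ratio of halo mass to central black-hole mass
    M_halo = halo_to_bh * M_bh
    print(f"Implied host halo mass ~ {M_halo:.1e} solar masses")   # ~1.2e12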

You can see that the abundance of such haloes is down by quite a factor at redshift 7 compared to redshift 0 (the present epoch), but the fall-off is even more precipitous for haloes of larger mass than this. We really need to know how abundant such objects are before drawing definitive conclusions, and one object isn’t enough to put a reliable estimate on the general abundance, but with the discovery of this object it’s certainly getting interesting. Haloes the size of a galaxy cluster, i.e. 10^{14} M_{\odot}, are rarer by many orders of magnitude at redshift 7 than at redshift 0, so if anyone ever finds one at this redshift that would really be a shock to many a cosmologist’s system, as would be the discovery of quasars with such a high mass at redshifts significantly higher than seven.

Another thing worth mentioning is that, although there might be a sufficient number of potential haloes to serve as hosts for a quasar, there remains the difficult issue of understanding precisely how the black hole forms and especially how long it takes to do so. This aspect of the process of quasar formation is much more complicated than the halo distribution, so it’s probably on detailed models of  black-hole  growth that this discovery will have the greatest impact in the short term.


by telescoper at February 28, 2015 01:48 PM

Tommaso Dorigo - Scientificblogging

Miscellanea
This week I was traveling in Belgium so my blogging activities have been scarce. Back home, I will resume with serious articles soon (with the XVI Neutrino Telescopes conference next week, there will be a lot to report on!). In the meantime, here's a list of short news you might care about as an observer of progress in particle physics research and related topics.

read more

by Tommaso Dorigo at February 28, 2015 11:07 AM

Geraint Lewis - Cosmic Horizons

Shooting relativistic fish in a rational barrel
I need to take a breather from grant writing, which is consuming almost every waking hour in between all of the other things that I still need to do. So see this post as a cathartic exercise.

What makes a scientist? Is it the qualification? What you do day-to-day? The associations and societies to which you belong? I think a unique definition may be impossible, as there is a continuum of properties of scientists. This makes it a little tricky for the lay-person to distinguish "real science" from "fringe science" (but, in all honesty, the distinction between these two is often not particularly clear cut).

One thing that science (and many other fields) do is have meetings, conferences and workshops to discuss their latest results. Some people seem to spend their lives flitting between exotic locations essentially presenting the same talk to almost the same audience, but all scientists probably attend a conference or two per year.

In one of my own fields, namely cosmology, there are lots of conferences per year. But accompanying these there is another set of conferences going on, also on cosmology and often including discussions of gravity, particle physics, and the power of electricity in the Universe. At these meetings, the words "rational" and "logical" are bandied about, and it is clear that the people attending think that the great mass of astronomers and physicists have gotten it all wrong, are deluded, and are colluding to keep the truth from the public for some bizarre agenda - some sort of worship of Einstein and "mathemagics" (I snorted with laughter when I heard this).

If I am being paid to lie to the public, I would like to point out that my cheque has not arrived and unless it does shortly I will go to the papers with a "tell all"!!

These are not a new phenomenon, but they were often in the shadows. But now, of course, with the internet, anyone can see these conferences in action, with lots of YouTube clips and lectures.

Is there any use for such videos? I think so, as, for the student of physics, they present an excellent place to test one's knowledge by identifying just where the presenters are straying off the path.

A brief search of YouTube will turn up talks that point out that black holes cannot exist because the vacuum condition

T_{\mu\nu} = 0

is the starting point for the derivation of the Schwarzschild solution.

Now, if you are not really familiar with the mathematics of relativity, this might look quite convincing. The key point is this equation, the Einstein field equation:

G_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

Roughly speaking, this says that the space-time geometry (left-hand side) is related to the matter and energy density (right-hand side), and you calculate the Schwarzschild geometry for a black hole by setting the right-hand side equal to zero.

Now, with the right-hand side equal to zero, that means there is no energy and mass, and the conclusion in the video is that there is no source, no thing to produce the bending of space-time and hence the effects of gravity. So, have the physicists been pulling the wool over everyone's eyes for almost 100 years?

Now, a university level student may not have done relativity yet, but it should be simple to see the flaw in this argument. And, to do this, we can use the wonderful world of classical mechanics.

In classical physics, where gravity is a force and we deal with potentials, we have a similar equation to the relativistic equation above. It's known as Poisson's equation:

\nabla^2 \Phi = 4 \pi G \rho

The left-hand side is related to derivatives of the gravitational potential, whereas the right-hand side is some constants (including Newton's gravitational constant, G) and the density, given by \rho.

I think everyone is happy with this equation. Now, one thing you calculate early on in gravitational physics is that the gravitational potential outside of a massive spherical object is given by

\Phi = -\frac{G M}{r}

Note that we are talking about the potential outside of the spherical body (the simple V and \Phi are meant to be the same thing). So, if we plug this potential into Poisson's equation, does it give us a mass distribution which is spherical?

Now, Poisson's equation can look a little intimidating, but let's recast the potential in Cartesian coordinates. Then it looks like this:

\Phi(x,y,z) = -\frac{G M}{\sqrt{x^2 + y^2 + z^2}}

Ugh! Does that make it any easier? Yes, let's just plug it into Wolfram Alpha to do the hard work. So, the derivatives have an x-part, y-part and z-part - here's the x-part:

\frac{\partial^2 \Phi}{\partial x^2} = \frac{G M \, (y^2 + z^2 - 2 x^2)}{(x^2 + y^2 + z^2)^{5/2}}

Again, if you are a mathphobe, this is not much better, but let's add the y- and z-parts.

After all that, the result is zero! Zilch! Nothing! This must mean that Poisson's equation for this potential is

\nabla^2 \Phi = 4 \pi G \rho = 0
So, the density is equal to zero. Where's the mass that produces the gravitational field? This is the same as the apparent problem with relativity. What Poisson's equation tells us is that the derivatives of the potential AT A POINT are related to the density AT THAT POINT!
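
For anyone who would rather not trust Wolfram Alpha, the same check takes a few lines of SymPy (a sketch of my own, not part of the original post):

    # Check that the Laplacian of Phi = -G M / sqrt(x^2 + y^2 + z^2) vanishes
    # away from the origin, i.e. the vacuum region has zero density.
    import sympy as sp

    x, y, z, G, M = sp.symbols('x y z G M', positive=True)
    Phi = -G * M / sp.sqrt(x**2 + y**2 + z**2)
    laplacian = sum(sp.diff(Phi, v, 2) for v in (x, y, z))
    print(sp.simplify(laplacian))   # prints 0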

Now, remember these are derivatives, and so the potential can have a whole bunch of shapes at that point, as long as the derivatives still hold. One of these, of course, is there being no mass there and so no gravitational potential at all, but any vacuum, with no mass, will obey this \nabla^2 \Phi = 0 equation, including the potential outside of any body (the one used in this example relied on a spherical source).

So, the relativistic version is that the properties of the space-time curvature AT A POINT are related to the mass and energy AT THAT POINT. A completely flat space-time is produced when there is no mass and energy anywhere, and so has G_{\mu\nu} = 0, but so does any point in a vacuum; that does not mean that the space-time at that point is not curved (and so that there is no gravity).

Anyway, I got that off my chest, and my Discovery Project submitted, but now it's time to get on with a LIEF application! 

by Cusp (noreply@blogger.com) at February 28, 2015 03:30 AM

Emily Lakdawalla - The Planetary Society Blog

Highlights from our reddit Space Policy AMA
The space policy and advocacy team at The Planetary Society held an AMA (ask me anything) on reddit, here are some of the highlights.

February 28, 2015 12:20 AM

February 27, 2015

arXiv blog

The Emerging Challenge of Augmenting Virtual Worlds With Physical Reality

If you want to interact with real world objects while immersed in a virtual reality, how do you do it?


Augmented reality provides a live view of the real world with computer generated elements superimposed. Pilots have long used head-up displays to access air speed data and other parameters while they fly. Some smartphone cameras can superimpose computer-generated characters on to the view of the real world. And emerging technologies such as Google Glass aim to superimpose useful information on to a real world view, such as navigation directions and personal data.

February 27, 2015 08:56 PM

Emily Lakdawalla - The Planetary Society Blog

Pluto Science, on the Surface
New Horizons' Principal Investigator Alan Stern gives an update on the mission's progress toward Pluto.

February 27, 2015 08:39 PM

The n-Category Cafe

Concepts of Sameness (Part 4)

This time I’d like to think about three different approaches to ‘defining equality’, or more generally, introducing equality in formal systems of mathematics.

These will be taken from old-fashioned logic — before computer science, category theory or homotopy theory started exerting their influence. Eventually I want to compare these to more modern treatments.

If you know other interesting ‘old-fashioned’ approaches to equality, please tell me!

The equals sign is surprisingly new. It was never used by the ancient Babylonians, Egyptians or Greeks. It seems to originate in 1557, in Robert Recorde’s book The Whetstone of Witte. If so, we actually know what the first equation looked like:

As you can see, the equals sign was much longer back then! He used parallel lines “because no two things can be more equal.”

Formalizing the concept of equality has raised many questions. Bertrand Russell published The Principles of Mathematics [R] in 1903. Not to be confused with the Principia Mathematica, this is where he introduced Russell’s paradox. In it, he wrote:

identity, an objector may urge, cannot be anything at all: two terms plainly are not identical, and one term cannot be, for what is it identical with?

In his Tractatus, Wittgenstein [W] voiced a similar concern:

Roughly speaking: to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing.

These may seem like silly objections, since equations obviously do something useful. The question is: precisely what?

Instead of tackling that head-on, I’ll start by recalling three related approaches to equality in the pre-categorical mathematical literature.

The indiscernibility of identicals

The principle of indiscernibility of identicals says that equal things have the same properties. We can formulate it as an axiom in second-order logic, where we’re allowed to quantify over predicates P:

\forall x \forall y [x = y \; \implies \; \forall P \, [P(x) \; \iff \; P(y)] ]

We can also formulate it as an axiom schema in 1st-order logic, where it’s sometimes called substitution for formulas. This is sometimes written as follows:

For any variables x, y and any formula \phi, if \phi' is obtained by replacing any number of free occurrences of x in \phi with y, such that these remain free occurrences of y, then

x = y \;\implies\; [\phi \;\implies\; \phi' ]

I think we can replace this with the prettier

x = y \;\implies\; [\phi \;\iff\; \phi']

without changing the strength of the schema. Right?

We cannot derive reflexivity, symmetry and transitivity of equality from the indiscernibility of identicals. So, this principle does not capture all our usual ideas about equality. However, as shown last time, we can derive symmetry and transitivity from this principle together with reflexivity. This uses an interesting form of argument where we take “being equal to z” as one of the predicates (or formulas) to which we apply the principle. There’s something curiously self-referential about this. It’s not illegitimate, but it’s curious.
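
For concreteness, here is one way to write out that derivation (my own rendering of the argument just described, not a quotation from last time):

    % Symmetry: apply indiscernibility to x = y with the predicate P(w) := (w = x):
    \[
      x = y \;\implies\; \bigl[\, (x = x) \;\iff\; (y = x) \,\bigr],
    \]
    % and since x = x holds by reflexivity, x = y gives y = x.
    %
    % Transitivity: apply it again with P(w) := (w = z):
    \[
      x = y \;\implies\; \bigl[\, (x = z) \;\iff\; (y = z) \,\bigr],
    \]
    % so from x = y and y = z we conclude x = z.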

The identity of indiscernibles

Leibniz [L] is often credited with formulating a converse principle, the identity of indiscernibles. This says that things with all the same properties are equal. Again we can write it as a second-order axiom:

\forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \implies \; x = y ]

or a first-order axiom schema.

We can go further if we take the indiscernibility of identicals and identity of indiscernibles together as a package:

\forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \iff \; x = y ]

This is often called the Leibniz law. It says an entity is determined by the collection of predicates that hold of that entity. Entities don’t have mysterious ‘essences’ that determine their individuality: they are completely known by their properties, so if two entities have all the same properties they must be the same.

This principle does imply reflexivity, symmetry and transitivity of equality. They follow from the corresponding properties of \iff in a satisfying way. Of course, if we were wondering why equality has these three properties, we are now led to wonder the same thing about the biconditional \iff. But this counts as progress: it’s a step toward ‘logicizing’ mathematics, or at least connecting = firmly to \iff.

Apparently Russell and Whitehead used a second-order version of the Leibniz law to define equality in the Principia Mathematica [RW], while Kalish and Montague [KL] present it as a first-order schema. I don’t know the whole history of such attempts.

When you actually look to see where Leibniz formulated this principle, it’s a bit surprising. He formulated it in the contrapositive form, he described it as a ‘paradox’, and most surprisingly, it’s embedded as a brief remark in a passage that would be hair-curling for many contemporary rationalists. It’s in his Discourse on Metaphysics, a treatise written in 1686:

Thus Alexander the Great’s kinghood is an abstraction from the subject, and so is not determinate enough to pick out an individual, and doesn’t involve the other qualities of Alexander or everything that the notion of that prince includes; whereas God, who sees the individual notion or ‘thisness’ of Alexander, sees in it at the same time the basis and the reason for all the predicates that can truly be said to belong to him, such as for example that he would conquer Darius and Porus, even to the extent of knowing a priori (and not by experience) whether he died a natural death or by poison — which we can know only from history. Furthermore, if we bear in mind the interconnectedness of things, we can say that Alexander’s soul contains for all time traces of everything that did and signs of everything that will happen to him — and even marks of everything that happens in the universe, although it is only God who can recognise them all.

Several considerable paradoxes follow from this, amongst others that it is never true that two substances are entirely alike, differing only in being two rather than one. It also follows that a substance cannot begin except by creation, nor come to an end except by annihilation; and because one substance can’t be destroyed by being split up, or brought into existence by the assembling of parts, in the natural course of events the number of substances remains the same, although substances are often transformed. Moreover, each substance is like a whole world, and like a mirror of God, or indeed of the whole universe, which each substance expresses in its own fashion — rather as the same town looks different according to the position from which it is viewed. In a way, then, the universe is multiplied as many times as there are substances, and in the same way the glory of God is magnified by so many quite different representations of his work.

(Emphasis mine — you have to look closely to find the principle of identity of indiscernibles, because it goes by so quickly!)

There have been a number of objections to the Leibniz law over the years. I want to mention one that might best be handled using some category theory. In 1952, Max Black [B] claimed that in a symmetrical universe with empty space containing only two symmetrical spheres of the same size, the two spheres are two distinct objects even though they have all their properties in common.

As Black admits, this problem only shows up in a ‘relational’ theory of geometry, where we can’t say that the spheres have different positions — e.g., one centered at the point (x,y,z), the other centered at (-x,-y,-z) — but only speak of their position relative to one another. This sort of theory is certainly possible, and it seems to be important in physics. But I believe it can be adequately formulated only with the help of some category theory. In the situation described by Black, I think we should say the spheres are not equal but isomorphic.

As widely noted, general relativity also pushes for a relational approach to geometry. Gauge theory, also, raises the issue of whether indistinguishable physical situations should be treated as equal or merely isomorphic. I believe the mathematics points us strongly in the latter direction.

A related issue shows up in quantum mechanics, where electrons are considered indistinguishable (in a certain sense), yet there can be a number of electrons in a box — not just one.

But I will discuss such issues later.

Extensionality

In traditional set theory we try to use sets as a substitute for predicates, saying x \in S as a substitute for P(x). This lets us keep our logic first-order and quantify over sets — often in a universe where everything is a set — as a substitute for quantifying over predicates. Of course there’s a glitch: Russell’s paradox shows we get in trouble if we try to treat every predicate as defining a set! Nonetheless it is a powerful strategy.

If we apply this strategy to reformulate the Leibniz law in a universe where everything is a set, we obtain:

\forall S \forall T [ S = T \; \iff \; \forall R [ S \in R \; \iff \; T \in R]]

While this is true in Zermelo-Fraenkel set theory, it is not taken as an axiom. Instead, people turn the idea around and use the axiom of extensionality:

\forall S \forall T [ S = T \; \iff \; \forall R [ R \in S \; \iff \; R \in T]]

Instead of saying two sets are equal if they’re in all the same sets, this says two sets are equal if all the same sets are in them. This leads to a view that takes the ‘contents’ of an entity as its defining feature, rather than the predicates that hold of it.

We could, in fact, send this idea back to second-order logic and say that predicates are equal if and only if they hold for the same entities:

\forall P \forall Q [\forall x [P(x) \; \iff \; Q(x)] \; \iff \; P = Q ]

as a kind of ‘dual’ of the Leibniz law:

\forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \iff \; x = y ]

I don’t know if this has been remarked on in the foundational literature, but it’s a close relative of a phenomenon that occurs in other forms of duality. For example, continuous real-valued functions F, G on a topological space obey

\forall F \forall G [\forall x [F(x) \; = \; G(x)] \; \iff \; F = G ]

but if the space is nice enough, continuous functions ‘separate points’, which means we also have

\forall x \forall y [ \forall F [ F(x) \; = \; F(y)] \; \iff \; x = y ]

Notes

by john (baez@math.ucr.edu) at February 27, 2015 04:26 PM

ZapperZ - Physics and Physicists

Much Ado About Dress Color
Have you been following this ridiculous debate about the color of this dress? People are going nuts all over different social media about what the color of this dress is based on the photo that has exploded all over the internet.

I'm calling it ridiculous because people are actually arguing with each other, disagreeing about what they see, and then finding it rather odd that other people do not see the same thing as they do, as if this were highly unusual and unexpected. Is the fact that different people see colors differently not well known? Seriously?

I've already mentioned the limitations of the human eye, and why it is really not a very good light detector in many respects. So using your eyes to determine the color of this dress is already suspect. Not only that, but given such uncertainty, one should not be too stubborn about what one sees, as if what you are seeing must be the ONLY way to see it.

But how would science solve this? Easy. A device such as a UV-VIS spectrometer can easily be used to measure the spectrum of reflected light, and the intensity of the spectral peaks. It tells you unambiguously which wavelengths are reflected off the source, and how much of each is reflected. So to solve this debate, cut pieces of the dress (corresponding to all the different colors on it) and stick them into one of these devices. Voila! You have killed the debate over the "color".
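
To be explicit about what you would do with the instrument's output, here is a toy sketch (made-up numbers, not actual dress data):

    # Toy reflectance spectrum: report the wavelength where reflectance peaks.
    import numpy as np

    wavelengths = np.arange(380, 701, 10)                    # nm, visible range
    reflectance = np.exp(-((wavelengths - 460) / 40.0)**2)   # made-up, blue-peaked sample

    peak = wavelengths[np.argmax(reflectance)]
    print(f"Peak reflectance at ~{peak} nm")                 # ~460 nm, i.e. a blue fabric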

This is something that can be determined objectively, without any subjective opinion of "color", and without the use of a poor light detector such as one's eyes. So, if someone can tell me where I can get a piece of this fabric, I'll test it out!

Zz.

by ZapperZ (noreply@blogger.com) at February 27, 2015 04:15 PM

CERN Bulletin

CERN Bulletin

Qminder, application of the Registration Service

by Journalist, Student at February 27, 2015 10:13 AM

CERN Bulletin

CERN Bulletin

Klaus Winter (1930 - 2015)

We learned with great sadness that Klaus Winter passed away on 9 February 2015, after a long illness.

 

Klaus was born in 1930 in Hamburg, where he obtained his diploma in physics in 1955. From 1955 to 1958 he held a scholarship at the Collège de France, where he received his doctorate in nuclear physics under the guidance of Francis Perrin. Klaus joined CERN in 1958, where he first participated in experiments on π+ and K0 decay properties at the PS, and later became the spokesperson of the CHOV Collaboration at the ISR.

Starting in 1976, his work focused on experiments with the SPS neutrino beam. In 1984 he joined Ugo Amaldi to head the CHARM experiment, designed for detailed studies of the neutral current interactions of high-energy neutrinos, which had been discovered in 1973 using the Gargamelle bubble chamber at the PS. The unique feature of the detector was its target calorimeter, which used large Carrara marble plates as an absorber material.

From 1984 to 1991, Klaus headed up the CHARM II Collaboration. The huge detector, which weighed 700 tonnes and was principally a sandwich structure of large glass plates and planes of streamer tubes, was primarily designed to study high-energy neutrino-electron scattering through neutral currents.

In recognition of the fundamental results obtained by these experiments, Klaus was awarded the Stern-Gerlach Medal in 1993, the highest distinction of the German Physical Society for exceptional achievements in experimental physics. In 1997, he was awarded the prestigious Bruno Pontecorvo Prize for his major contributions to neutrino physics by the Joint Institute for Nuclear Research in Dubna.

The last experiment under his leadership, from 1991 until his retirement, was CHORUS, which used a hybrid emulsion-electronic detector primarily designed to search for νμ → ντ oscillations in the then-favoured region of large mass differences and small mixing angle.

Among other responsibilities, Klaus served for many years as editor of Physics Letters B and on the Advisory Committee of the International Conference on Neutrino Physics and Astrophysics. He was also the editor of two renowned books, Neutrino Physics (1991 and 2000) and Neutrino Mass with Guido Altarelli (2003).

An exceptional researcher, he also lectured physics at the University of Hamburg and – after the reunification of Germany – at the Humboldt University of Berlin, supervising 25 PhD theses and seven Habilitationen.

Klaus was an outstanding and successful leader, dedicated to his work, which he pursued with vision and determination. His intellectual horizons were by no means limited to science, extending far into culture and the arts, notably modern painting.

We have lost an exceptional colleague and friend.
 

His friends and colleagues from CHARM, CHARM II and CHORUS

February 27, 2015 10:02 AM

Georg von Hippel - Life on the lattice

Back from Mumbai
On Saturday, my last day in Mumbai, a group of colleagues rented a car with a driver to take a trip to Sanjay Gandhi National Park and visit the Kanheri caves, a Buddhist site consisting of a large number of rather simple monastic cells and some worship and assembly halls with ornate reliefs and inscriptions, all carved out of solid rock (some of the cell entrances seem to have been restored using steel-reinforced concrete, though).

On the way back, we stopped at Mani Bhavan, where Mahatma Gandhi lived from 1917 to 1934, and which is now a museum dedicated to his life and legacy.

In the night, I flew back to Frankfurt, where the temperature was much lower than in Mumbai; in fact, on Monday there was snow.

by Georg v. Hippel (noreply@blogger.com) at February 27, 2015 10:01 AM

Lubos Motl - string vacua and pheno

Nature is subtle
Caltech has created its new Walter Burke Institute for Theoretical Physics. It's named after Walter Burke – but it is neither the actor nor the purser nor the hurler; it's Walter Burke the trustee, so no one seems to give a damn about him.



Walter Burke, the actor

That's why John Preskill's speech [URL fixed, tx] focused on a different topic, namely his three principles of creating the environment for good physics.




His principles are, using my words,
  1. the best way to learn is to teach
  2. two-trick ponies (people working at the collision point of two disciplines) are great
  3. Nature is subtle
Let me say a few words about these principles.




Teaching as a way of learning

First, for many of us, teaching is indeed a great way to learn. If you are passionate about teaching, you are passionate about making things so clear to the "student" that he or she just can't object. But to achieve this clarity, you must clarify all the potentially murky points that you may be willing to overlook if the goal were just for you to learn the truth.

You "know" what the truth is, perhaps because you have a good intuition or you have solved similar or very closely related things in the past, and it's therefore tempting – and often useful, if you want to save your time – not to get distracted by every doubt. But a curious, critical student will get distracted and he or she will interrupt you and ask the inconvenient questions.

If you are a competent teacher, you must be able to answer pretty much all questions related to what you are saying, and by getting ready for this deep questioning, you learn the topic really properly.

I guess that John Preskill would agree that I am interpreting his logic in different words and that I am probably thinking about these matters similarly to him. Many famous physicists have agreed. For example, Richard Feynman said that it was important for him to be hired as a teacher, because even if the research wasn't moving forward – and it often wasn't – he still knew that he was doing something useful.

But I still think it's fair to say that many great researchers don't think in this way – and many great researchers aren't even good teachers. Bell Labs have employed numerous great non-teacher researchers. And on the contrary, many good teachers are not able to become great researchers. For those reasons, I think that Preskill's implicit point about the link between teaching and finding new results isn't true in general.

Two-trick ponies

Preskill praises the concept of two-trick ponies – people who learn (at least) two disciplines and benefit from the interplay between them. He is an example of a two-trick pony. And it's great if it works.

On the other hand, I still think that a clear majority of the important results occur within one discipline. And most combinations of disciplines end up being low-quality science. People often market themselves as interdisciplinary researchers because they're not too good in either discipline – and whenever their deficit in one discipline is unmasked, they may suggest that they're better in the other one. Except that they often turn out to be weak in all of their disciplines.

So interdisciplinary research is often just a euphemism for bad research hiding its low quality. Moreover, even if one doesn't talk about imperfect people at all, I think that random pairs of disciplines (or subdisciplines) of science (or physics) are unlikely to lead to fantastic offspring, at least not after a limited effort.

Combinations of two disciplines have led and will probably lead to several important breakthroughs – but they are very rare.

There is another point related to the two-trick ponies. Many breakthroughs in physics resulted from the solution to a paradox. The apparent paradox arose from two different perspectives on a problem. These perspectives may usually be associated with two subdisciplines of physics.

Einstein's special relativity is the resolution of disagreements between classical mechanics and classical field theory (electromagnetism) concerning the question of how objects behave as they approach the speed of light. String theory is the reconciliation of the laws of the small (quantum field theory) and the laws of the large (general relativity), and there are other examples.

Even though the two perspectives that are being reconciled correspond to different parts of the physics research and the physics community, they are often rather close sociologically. So theoretical physicists tend to know both. The very question of whether two classes of questions in physics should be classified as "one pony" or "two ponies" (or "more than two ponies") is a matter of convention. After all, there is just one science, and the precise separation of science into disciplines is a human invention.

This ambiguous status of the term "two-trick pony" seriously weakens John Preskill's second principle. When we say that someone is a "two-trick pony", we may only define this proposition relatively to others. A "two-trick pony" is more versatile than others – he knows stuff from subdisciplines that are further from each other than the subdisciplines mastered by other typical ponies.

But versatility isn't really the general key to progress, either. Focus and concentration may often be more important. So I don't really believe that John Preskill's second principle can be reformulated as a general rule with universal validity.

Nature is subtle

However, I fully agree with Preskill's third principle that says that Nature is subtle. Subtle is Nature but malicious She is not. ;-) Preskill quotes the holographic principle in quantum gravity as our best example of Nature's subtle character. That's a great (but not the greatest) choice of an example, I think. Preskill adds a few more words explaining what he means by the adjective "subtle":
Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.
Nature isn't a prostitute. She is hiding many of Her secrets. That's why the self-confidence of a man who declares himself to be the naturally born expert in Nature's intimate organs may often be unjustified and foolish. The appearances are often misleading. The men often confuse the pubic hair with the swimming suit, the \(\bra{\rm bras}\) with the \(\ket{\rm cats}\) beneath the \(\bra{\rm bras}\), and so on. We aren't born with the accurate knowledge of the most important principles of Nature.

We have to learn them by carefully studying Nature and we should always understand that any partial insight we make may be an illusion. To say the least, every generalization or extrapolation of an insight may turn out to be wrong.

And it may be wrong not just in the way we can easily imagine – a type of wrongness of our theories that we're ready to expect from the beginning. Our provisional theories may be wrong for much more profound reasons.

Of course, I consider the postulates of quantum mechanics to be the most important example of Nature's subtle character. A century ago, physicists were ready to generalize the state-of-the-art laws of classical physics in many "understandable" ways: to add new particles, new classical fields, new terms in the equations that govern them, higher derivatives, and so on. And Lord Kelvin thought that even those "relatively modest" steps had already been completed, so all that remained was to measure the parameters of Nature more accurately than before.

But quantum mechanics forced us to change the whole paradigm. Even though the class of classical (and usually deterministic) theories seemed rather large and tolerant, quantum mechanics showed that it's an extremely special, \(\hbar=0\) limit of more general theories of Nature (quantum mechanical theories) that we must use instead of the classical ones. The objective reality doesn't exist at the fundamental level.

(The \(\hbar=0\) classical theories may look like a "measure zero" subset of the quantum+classical ones with a general value of \(\hbar\). But because \(\hbar\) is dimensionful in usual units and its numerical value may therefore be changed by any positive factor by switching to different units, we may only qualitatively distinguish \(\hbar=0\) and \(\hbar\neq 0\). That means that the classical and quantum theories are pretty much "two comparably large classes" of theories. The classical theories are a "contraction" or a "limit" of the quantum ones; some of the quantum ones are "deformations" of the classical ones. Because of these relationships, it was possible for the physicists to think that the world obeys classical laws although for 90 years, we have known very clearly that it only obeys the quantum laws.)
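
(A side note of my own, not part of the original post: the textbook way to spell out this "contraction" and "deformation" is that the whole deformation is controlled by the single parameter \(\hbar\) in the canonical commutator, and the Poisson bracket of classical mechanics is recovered in the \(\hbar\to 0\) limit:

\[ [\hat x,\hat p] = i\hbar, \qquad \lim_{\hbar\to 0}\,\frac{1}{i\hbar}\,[\hat f,\hat g] = \{f,g\}_{\mathrm{PB}} . \]

Setting \(\hbar=0\) kills the commutator structure entirely, which is the contraction of the quantum theory to the classical one.)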

Quantum mechanics demonstrated that people were way too restrictive when it came to the freedoms they were "generously" willing to grant to Nature. Nature just found the straitjacket to be unacceptably suffocating. It simply doesn't work like that. Quantum mechanics is the most important example of a previously unexpected difficulty but there are many other examples.

In the end, the exact theory of Nature – and the best approximations of it that we can explain these days – are consistent. But the very consistency may sometimes look surprising to a person who doesn't have a sufficient background in mathematics, who hasn't studied the topic enough, or who is simply not sufficiently open-minded or honest.

Lay people – and some of the self-styled (or highly paid!) physicists as well – often incorrectly assume that the right theory must belong to a class of theories (classical theories, those with an objective reality of some kind, were my most important example) that they believe is sufficiently broad and surely contains all viable contenders. They believe that all candidates not belonging to this class are crazy or inconsistent. They violate common sense, don't they?

But this instinctive expectation is often wrong. In reality, they have some evidence that their constraint on the theory is a sufficient condition for a theory to be consistent. But they often incorrectly claim that their restriction is actually a necessary condition for the consistency, even though it is not. In most cases, when this error takes place, the condition they were willing to assume is not only unnecessary; it is actually demonstrably wrong when some other, more important evidence or arguments are taken into account.

A physicist simply cannot ignore the possibility that his assumptions are wrong, even if he used to consider them "obvious facts" or "common sense". Nature is subtle and not obliged to pay lip service to common sensibilities. The more dramatic the differences between the theories obeying an assumption and those violating it, the more attention a physicist must pay to the question of whether the assumption is actually correct.

Physicists are supposed to find some important or fundamental answers – to construct the big picture. That's why they unavoidably structure their knowledge hierarchically to the "key things" and the "details", and they prefer to care about the former (leaving the latter to the engineers and others). However, separating ideas into "key things" and "details" mindlessly is very risky because the things you consider "details" may very well show that your "key things" are actually wrong, the right "key things" are completely different, and many of the things you consider "details" are not only different than you assumed, but they may actually be some of the "key things" (or the most important "key things"), too!

Of course, I was thinking about very particular examples when I was writing the previous, contrived paragraph. I was thinking about bad or excessively stubborn physicists (if you want me to ignore full-fledged crackpots) and their misconceptions. Those who believe that Nature must have a "realist" description – effectively one from the class of classical theories – may consider all the problems (of the "many worlds interpretation" or any other "realist interpretation" of quantum mechanics) pointed out by others to be just "details". If something doesn't work about these "details", these people believe, those defects will be fixed by some "engineers" in the future.

But most of these objections aren't details at all and it may be seen that no "fix" will ever be possible. They are valid and almost rock-solid proofs that the "key assumptions" of the realists are actually wrong. And if someone or something may overthrow a "key player", then he or it must be a "key player", too. He or it can't be just a "detail"! So if there seems to be some evidence – even if it looks like technical evidence composed of "details" – that actually challenges or disproves your "key assumptions", you simply have to care about it because all your opinions about the truth, along with your separation of questions to the "big ones" and "details", may be completely wrong.

If you don't care about these things, it's too bad and you're very likely to end up in the cesspool of religious fanaticism and pseudoscience, together with assorted religious bigots and Sean Carrolls.

by Luboš Motl (noreply@blogger.com) at February 27, 2015 06:34 AM

astrobites - astro-ph reader's digest

Corpse too Bright? Make it Bigger!

Title: “Circularization” vs. Accretion — What Powers Tidal Disruption Events?
Authors: T. Piran, G. Svirski, J. Krolik, R. M. Cheng, H. Shiokawa
First Author’s Institution: Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem

 

Our day-to-day experiences with gravity are fairly tame. It keeps our GPS satellites close and ready for last-minute changes to an evening outing, brings us the weekend snow and rain that beg for a cozy afternoon curled up warm and dry under covers with a book and a steaming mug, anchors our morning cereal to its rightful place in our bowls (or in our tummies, for that matter), and keeps the Sun in view day after day for millennia on end, nourishing the plants that feed us and radiating upon us its cheering light. Combined with a patch of slippery ice, gravity may produce a few lingering bruises, and occasionally we'll hear about the brave adventurers who, in search of higher vistas, slip tragically off an icy slope or an unforgiving cliff. But all in all, gravity in our everyday lives is a largely unnoticed, unheralded hero that works continually behind the scenes to maintain life as we know it.

Park yourself outside a relatively small but massive object such as the supermassive black hole lurking at the center of our galaxy, and you'll discover sly gravity's more feral side. Gravity's inverse square law dependence on your distance from your massive object of choice dictates that as you get closer and closer to said object, the strength of gravity will increase drastically: if you halve your distance to the massive object, it will pull four times as hard on you; if you quarter your distance, it will pull sixteen times as hard; and, well, hang on tight to your shoes, because you may start to feel them tugging away from your feet. At this point, though, you should be high-tailing it away from the massive object as fast as you can rather than attending to your footwear, for if you're sufficiently close, the difference in the gravitational pull between your head and your feet can be large enough that you'll stretch and deform into a long string—or "spaghettify," as astronomers have officially termed this painful and gruesome path of no return.


Figure 1. A schematic of the accretion disk created when a star passes too close to a supermassive black hole. The star is ripped up by the black hole, and its remnants form the disk. Shocks (red) generated as stellar material falls onto the disk produce the light we observe. [Figure taken from today’s paper.]

While it doesn’t look like there’ll be a chance for the daredevils among us to visit such an object and test these ideas any time soon, there are other things that have the unfortunate privilege of doing so: stars. If a star passes closely enough to a supermassive black hole so that the star’s self-gravity—which holds it together in one piece—is dwarfed by the difference in the gravitational pull of the black hole on one side to the star to the other, the black hole raises tides on the star (much like the oceanic tides produced by the Moon and the Sun on Earth) can become so large that it deforms until it rips apart.  The star spaghettifies in what astronomers call a tidal disruption event, or TDE, for short. The star-black hole separation below which the star succumbs to such a fate is called its tidal radius (see Nathan’s post for more details on the importance of the tidal radius in TDEs). A star that passes within this distance sprays out large quantities of its hot gas as it spirals to its eventual death in the black hole. But the star doesn’t die silently.  The stream of hot gas it sheds can produce a spectacular light show that can lasts for months. The gas, too, is eventually swallowed by the black hole, but first forms an accretion disk around the black hole that extends up to the tidal radius. The gas violently releases its kinetic energy in shocks that form near what would have been the original star’s point of closest approach (its periapsis) and where the gas wraps around the black hole then collides with the stream of newly infalling stellar gas at the edge of the disk (see Figure 1). It is the energy radiated by these shocks that eventually escape and make their way to our telescopes, where we can observe them—a distant flare at the heart of a neighboring galaxy.

Or so we thought.

 

TDEs, once just a theorist's whimsy, have catapulted in standing to an observational reality as TDE-esque flares have been observed near our neighborly supermassive black holes. An increasing number of these have been discovered through UV/optical observations (the alternative method being X-rays), which have yielded some disturbing trends that contradict the predictions of the classic TDE picture. These UV/optical TDEs aren't as luminous as we expect. They aren't as hot as we thought they would be, and many of them stay at the same temperature rather than cooling with time. The light we do see seems to come from a region much larger than we expected, and the gas producing the light is moving more slowly than our classic picture suggested. Haven't you thrown in the towel already?

But hang on to your terrycloth—and cue in the authors of today's paper. Inspired by new detailed simulations of TDEs, they suggested that what we're seeing in the optical is not the light from shocks in an accretion disk that extends up to the tidal radius, but from a disk that extends about 100 times that distance. Again, shocks from interacting streams of gas—but this time extending out to the larger radius—produce the light we observe. The larger disk automatically solves the size problem, and conveniently solves the velocity problem with it, since Kepler's laws predict that material would be moving more slowly at the larger radius. This in turn reduces the luminosity of the TDE, which is powered by the loss of kinetic energy (which, of course, scales with the velocity) at the edge of the disk. A larger radius and lower luminosity work to reduce the blackbody temperature of the gas. The authors predicted how each of the observations inconsistent with the classic TDE model would change under the new model, and found good agreement with the measured peak luminosities, temperatures, line widths (a proxy for the speed of the gas), and estimated sizes of the emitting regions for seven TDEs discovered in the UV/optical.
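
To see how that single change of radius propagates through the other observables, here is a toy scaling estimate of my own (not the authors' calculation): the Keplerian speed scales as v ∝ r^(-1/2), the shock luminosity as the kinetic-energy loss rate L ∝ v² (holding the rate of infalling mass fixed), and the blackbody temperature as T ∝ (L/r²)^(1/4).

    # Toy scaling of TDE observables when the emitting radius grows by a factor of 100.
    # Pure proportionalities with arbitrary normalization; not the authors' numbers.

    r_ratio = 100.0                             # new emitting radius / classic tidal radius

    v_ratio = r_ratio ** -0.5                   # Keplerian speed: v ~ r^(-1/2)
    L_ratio = v_ratio ** 2                      # shock luminosity ~ kinetic energy ~ v^2
    T_ratio = (L_ratio / r_ratio ** 2) ** 0.25  # blackbody: T ~ (L / r^2)^(1/4)

    print(f"speed drops by       {1 / v_ratio:.0f}x")  # 10x slower
    print(f"luminosity drops by  {1 / L_ratio:.0f}x")  # 100x fainter
    print(f"temperature drops by {1 / T_ratio:.0f}x")  # ~32x cooler

A hundred-fold larger emitting region thus gives gas moving about ten times more slowly, a luminosity down by roughly a factor of a hundred, and a noticeably cooler blackbody, qualitatively the direction of all the discrepancies listed above.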

But as most theories are wont to do, while this model solves many observational puzzles, it opens another one: these lower luminosity TDEs radiate only 1% of the energy the stellar remains should lose as they are accreted onto the black hole.  So where does the rest of the energy go?  The authors suggest a few different means (photon trapping? outflows? winds? emission at other wavelengths?), but all of them appear unsatisfying for various reasons.  It appears that these stellar corpses will live on in astronomers’ deciphering minds.

by Stacy Kim at February 27, 2015 02:18 AM

February 26, 2015

Quantum Diaries

Twitter, Planck and supernovae

Matthieu Roman is a young CNRS researcher in Paris and a complete newcomer to the twittosphere. He tells us how he nevertheless ended up tweeting "live from his lab" for a week. On the menu: free-flowing exchanges about the Planck experiment, supernovae and dark energy, with a passionate and attentive audience. Perhaps the beginning of a vocation in science outreach?

But how did I get there? It all began during my PhD thesis in cosmology at the Laboratoire Astroparticule et Cosmologie (APC, CNRS/Paris Diderot), under the supervision of Jacques Delabrouille, between 2011 and 2014. That thesis brought me into the large scientific collaboration built around the Planck satellite, and in particular its high-frequency instrument, better known by its English acronym HFI. During those three years I worked on the cosmological study of the galaxy clusters detected by Planck through the Sunyaev-Zel'dovich effect (the interaction of photons from the cosmic microwave background with the electrons trapped inside galaxy clusters). In March 2013 I therefore had a front-row seat when Planck's temperature data were released, triggering an impressive media frenzy. The results demonstrated the robustness of the current cosmological model, made of cold dark matter and dark energy.

Did we discover primordial gravitational waves?
Then, a few months later, the Americans of the BICEP2 experiment, located at the South Pole, summoned the world's media to announce the discovery of primordial gravitational waves in their polarization data. They were simply handing us the cosmologists' Holy Grail! More excitement, with experts of all kinds invited onto TV sets and into newspapers to explain that we had detected what Einstein had predicted a century earlier.

But within the Planck collaboration there were many sceptics. We did not yet have the means to answer BICEP2, because our polarization data had not yet been analysed, but we sensed that a large part of the polarized signal from galactic dust had not been taken into account.

The latest results showed a map of galactic dust on which the direction of the galactic magnetic field has been overlaid. I find it particularly beautiful! Credits: ESA - Planck collaboration

And now, as of a few days ago, it is official! Planck, in a joint study with BICEP2 and Keck, sets an upper limit on the amount of primordial gravitational waves, and therefore reports no detection. In short, back to square one, but with a great deal of additional information. Future space missions, and ground-based or balloon-borne experiments aiming to measure the large-scale polarization of the cosmic microwave background with high precision, whose purpose might have been called into question had BICEP2 been right, have just regained their full relevance. Because those primordial gravitational waves will have to be hunted down, with ever larger numbers of on-board detectors to increase the sensitivity, and the ability to confirm beyond doubt the cosmological origin of any detected signal!

From galactic dust to exploding stars
In the meantime, I had the opportunity to extend my research for three more years with a postdoc at the Laboratoire de physique nucléaire et de hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot) on a subject completely new to me: supernovae, those stars at the end of their lives whose explosions are extremely luminous. We study them with the ultimate goal of pinning down the nature of dark energy, which is held responsible for the accelerated expansion of the Universe. Back when the existence of dark energy was established using supernovae (1999), their light curves were thought to vary rather little from one object to the next, which is why they came to be called "standard candles".

In this image of the galaxy M101 a supernova that exploded in 2011 is clearly visible: it is the big white dot at the top right. It lies in one of the spiral arms, but would not shine in the same way if it were at the centre. Credit: T.A. Rector (University of Alaska Anchorage), H. Schweiker & S. Pakzad NOAO/AURA/NSF

As detection methods have been refined, we have realised that supernovae are not quite the standard candles we believed them to be, which completely revives the interest of the field. In particular, the type of galaxy in which a supernova explodes can introduce variations in luminosity, and thus affect the measurement of the parameter describing the nature of dark energy. That is the project I have taken on within the (small) Supernova Legacy Survey (SNLS) collaboration, hoping one day to study these objects in other scientific projects, with even more powerful instruments such as Subaru or LSST.

Tweeting live from my lab…
In fact it was a friend, Agnès, who introduced me to Twitter and encouraged me to describe my work, day by day and for one week, via the @EnDirectDuLabo account. It was a new world for me, as I was not at all active on what is called the "twittosphere". Unfortunately, that is the case for many researchers in France. It was a very enriching experience, since it seemed to attract the interest of many Twitter users and brought the account to more than 2000 followers. It allowed me, for example, to explain the basics of electromagnetism needed in astronomy, more technical details about the performance of the experiment I work on, and also my day-to-day life in the lab.

It was great fun to share my daily work with the general public, but also very time-consuming! I have always been convinced of the importance of science outreach, without ever daring to take the plunge. Perhaps it was about time…

Matthieu Roman is currently a postdoctoral researcher at the Laboratoire de physique nucléaire et de hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot).

by CNRS-IN2P3 at February 26, 2015 11:01 PM

The n-Category Cafe

Introduction to Synthetic Mathematics (part 1)

John is writing about “concepts of sameness” for Elaine Landry’s book Category Theory for the Working Philosopher, and has been posting some of his thoughts and drafts. I’m writing for the same book about homotopy type theory / univalent foundations; but since HoTT/UF will also make a guest appearance in John’s and David Corfield’s chapters, and one aspect of it (univalence) is central to Steve Awodey’s chapter, I had to decide what aspect of it to emphasize in my chapter.

My current plan is to focus on HoTT/UF as a synthetic theory of \(\infty\)-groupoids. But in order to say what that even means, I felt that I needed to start with a brief introduction about the phrase “synthetic theory”, which may not be familiar. Right now, my current draft of that “introduction” is more than half the allotted length of my chapter; so clearly it’ll need to be trimmed! But I thought I would go ahead and post some parts of it in its current form; so here goes.

In general, mathematical theories can be classified as analytic or synthetic. An analytic theory is one that analyzes, or breaks down, its objects of study, revealing them as put together out of simpler things, just as complex molecules are put together out of protons, neutrons, and electrons. For example, analytic geometry analyzes the plane geometry of points, lines, etc. in terms of real numbers: points are ordered pairs of real numbers, lines are sets of points, etc. Mathematically, the basic objects of an analytic theory are defined in terms of those of some other theory.

By contrast, a synthetic theory is one that synthesizes, or puts together, a conception of its basic objects based on their expected relationships and behavior. For example, synthetic geometry is more like the geometry of Euclid: points and lines are essentially undefined terms, given meaning by the axioms that specify what we can do with them (e.g. two points determine a unique line). (Although Euclid himself attempted to define “point” and “line”, modern mathematicians generally consider this a mistake, and regard Euclid’s “definitions” (like “a point is that which has no part”) as fairly meaningless.) Mathematically, a synthetic theory is a formal system governed by rules or axioms. Synthetic mathematics can be regarded as analogous to foundational physics, where a concept like the electromagnetic field is not “put together” out of anything simpler: it just is, and behaves in a certain way.

The distinction between analytic and synthetic dates back at least to Hilbert, who used the words “genetic” and “axiomatic” respectively. At one level, we can say that modern mathematics is characterized by a rich interplay between analytic and synthetic — although most mathematicians would speak instead of definitions and examples. For instance, a modern geometer might define “a geometry” to satisfy Euclid’s axioms, and then work synthetically with those axioms; but she would also construct examples of such “geometries” analytically, such as with ordered pairs of real numbers. This approach was pioneered by Hilbert himself, who emphasized in particular that constructing an analytic example (or model) proves the consistency of the synthetic theory.

However, at a deeper level, almost all of modern mathematics is analytic, because it is all analyzed into set theory. Our modern geometer would not actually state her axioms the way that Euclid did; she would instead define a geometry to be a set \(P\) of points together with a set \(L\) of lines and a subset of \(P\times L\) representing the “incidence” relation, etc. From this perspective, the only truly undefined term in mathematics is “set”, and the only truly synthetic theory is Zermelo–Fraenkel set theory (ZFC).
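
(Spelled out in symbols, as my own paraphrase rather than a quotation from the post, the analytic move packages a geometry as data inside set theory, with Euclid-style axioms such as “two distinct points determine a unique line” imposed on that data afterwards:

\[ \text{a geometry} = \bigl(P,\ L,\ I \subseteq P \times L\bigr), \qquad \forall\, p \neq q \in P\ \ \exists!\, \ell \in L :\ (p,\ell) \in I \wedge (q,\ell) \in I . \]

Every term on the left is ultimately a set, which is the sense in which the synthetic content has been analyzed away into ZFC.)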

This use of set theory as the common foundation for mathematics is, of course, of 20th century vintage, and overall it has been a tremendous step forwards. Practically, it provides a common language and a powerful basic toolset for all mathematicians. Foundationally, it ensures that all of mathematics is consistent relative to set theory. (Hilbert’s dream of an absolute consistency proof is generally considered to have been demolished by Gödel’s incompleteness theorem.) And philosophically, it supplies a consistent ontology for mathematics, and a context in which to ask metamathematical questions.

However, ZFC is not the only theory that can be used in this way. While not every synthetic theory is rich enough to allow all of mathematics to be encoded in it, set theory is by no means unique in possessing such richness. One possible variation is to use a different sort of set theory like ETCS, in which the elements of a set are “featureless points” that are merely distinguished from each other, rather than labeled individually by the elaborate hierarchical membership structures of ZFC. Either sort of “set” suffices just as well for foundational purposes, and moreover each can be interpreted into the other.

However, we are now concerned with more radical possibilities. A paradigmatic example is topology. In modern “analytic topology”, a “space” is defined to be a set of points equipped with a collection of subsets called open, which describe how the points vary continuously into each other. (Most analytic topologists, being unaware of synthetic topology, would call their subject simply “topology.”) By contrast, in synthetic topology we postulate instead an axiomatic theory, on the same ontological level as ZFC, whose basic objects are spaces rather than sets.

Of course, by saying that the basic objects “are” spaces we do not mean that they are sets equipped with open subsets. Instead we mean that “space” is an undefined word, and the rules of the theory cause these “spaces” to behave more or less like we expect spaces to behave. In particular, synthetic spaces have open subsets (or, more accurately, open subspaces), but they are not defined by specifying a set together with a collection of open subsets.

It turns out that synthetic topology, like synthetic set theory (ZFC), is rich enough to encode all of mathematics. There is one trivial sense in which this is true: among all analytic spaces we find the subclass of indiscrete ones, in which the only open subsets are the empty set and the whole space. A notion of “indiscrete space” can also be defined in synthetic topology, and the collection of such spaces forms a universe of ETCS-like sets (we’ll come back to these in later installments). Thus we could use them to encode mathematics, entirely ignoring the rest of the synthetic theory of spaces. (The same could be said about the discrete spaces, in which every subset is open; but these are harder (though not impossible) to define and work with synthetically. The relation between the discrete and indiscrete spaces, and how they sit inside the synthetic theory of spaces, is central to the synthetic theory of cohesion, which I believe David is going to mention in his chapter about the philosophy of geometry.)

However, a less boring approach is to construct the objects of mathematics directly as spaces. How does this work? It turns out that the basic constructions on sets that we use to build (say) the set of real numbers have close analogues that act on spaces. Thus, in synthetic topology we can use these constructions to build the space of real numbers directly. If our system of synthetic topology is set up well, then the resulting space will behave like the analytic space of real numbers (the one that is defined by first constructing the mere set of real numbers and then equipping it with the unions of open intervals as its topology).

The next question is, why would we want to do mathematics this way? There are a lot of reasons, but right now I believe they can be classified into three sorts: modularity, philosophy, and pragmatism. (If you can think of other reasons that I’m forgetting, please mention them in the comments!)

By “modularity” I mean the same thing as does a programmer: even if we believe that spaces are ultimately built analytically out of sets, it is often useful to isolate their fundamental properties and work with those abstractly. One advantage of this is generality. For instance, any theorem proven in Euclid’s “neutral geometry” (i.e. without using the parallel postulate) is true not only in the model of ordered pairs of real numbers, but also in the various non-Euclidean geometries. Similarly, a theorem proven in synthetic topology may be true not only about ordinary topological spaces, but also about other variant theories such as topological sheaves, smooth spaces, etc. As always in mathematics, if we state only the assumptions we need, our theorems become more general.

Even if we only care about one model of our synthetic theory, modularity can still make our lives easier, because a synthetic theory can formally encapsulate common lemmas or styles of argument that in an analytic theory we would have to be constantly proving by hand. For example, just as every object in synthetic topology is “topological”, every “function” between them automatically preserves this topology (is “continuous”). Thus, in synthetic topology every function \(\mathbb{R}\to \mathbb{R}\) is automatically continuous; all proofs of continuity have been “packaged up” into the single proof that analytic topology is a model of synthetic topology. (We can still speak about discontinuous functions too, if we want to; we just have to re-topologize \(\mathbb{R}\) indiscretely first. Thus, synthetic topology reverses the situation of analytic topology: discontinuous functions are harder to talk about than continuous ones.)

By contrast to the argument from modularity, an argument from philosophy is a claim that the basic objects of mathematics really are, or really should be, those of some particular synthetic theory. Nowadays it is hard to find mathematicians who hold such opinions (except with respect to set theory), but historically we can find them taking part in the great foundational debates of the early 20th century. It is admittedly dangerous to make any precise claims in modern mathematical language about the beliefs of mathematicians 100 years ago, but I think it is justified to say that in hindsight, one of the points of contention in the great foundational debates was which synthetic theory should be used as the foundation for mathematics, or in other words what kind of thing the basic objects of mathematics should be. Of course, this was not visible to the participants, among other reasons because many of them used the same words (such as “set”) for the basic objects of their theories. (Another reason is that among the points at issue was the very idea that a foundation of mathematics should be built on precisely defined rules or axioms, which today most mathematicians take for granted.) But from a modern perspective, we can see that (for instance) Brouwer’s intuitionism is actually a form of synthetic topology, while Markov’s constructive recursive mathematics is a form of “synthetic computability theory”.

In these cases, the motivation for choosing such synthetic theories was clearly largely philosophical. The Russian constructivists designed their theory the way they did because they believed that everything should be computable. Similarly, Brouwer’s intuitionism can be said to be motivated by a philosophical belief that everything in mathematics should be continuous.

(I wish I could write more about the latter, because it’s really interesting. The main thing that makes Brouwerian intuitionism non-classical is choice sequences: infinite sequences in which each element can be “freely chosen” by a “creating subject” rather than being supplied by a rule. The concrete conclusion Brouwer drew from this is that any operation on such sequences must be calculable, at least in stages, using only finite initial segments, since we can’t ask the creating subject to make an infinite number of choices all at once. But this means exactly that any such operation must be continuous with respect to a suitable topology on the space of sequences. It also connects nicely with the idea of open sets as “observations” or “verifiable statements” that was mentioned in another thread. However, from the perspective of my chapter for the book, the purpose of this introduction is to lay the groundwork for discussing HoTT/UF as a synthetic theory of \(\infty\)-groupoids, and Brouwerian intuitionism would be a substantial digression.)

Finally, there are arguments from pragmatism. Whereas the modularist believes that the basic objects of mathematics are actually sets, and the philosophist believes that they are actually spaces (or whatever), the pragmatist says that they could be anything: there’s no need to commit to a single choice. Why do we do mathematics, anyway? One reason is because we find it interesting or beautiful. But all synthetic theories may be equally interesting and beautiful (at least to someone), so we may as well study them as long as we enjoy it.

Another reason we study mathematics is because it has some application outside of itself, e.g. to theories of the physical world. Now it may happen that all the mathematical objects that arise in some application happen to be (say) spaces. (This is arguably true of fundamental physics. Similarly, in applications to computer science, all objects that arise may happen to be computable.) In this case, why not just base our application on a synthetic theory that is good enough for the purpose, thereby gaining many of the advantages of modularity, but without caring about how or whether our theory can be modeled in set theory?

It is interesting to consider applying this perspective to other application domains. For instance, we also speak of sets outside of a purely mathematical framework, to describe collections of physical objects and mental acts of categorization; could we use spaces in the same way? Might collections of objects and thoughts automatically come with a topological structure by virtue of how they are constructed, like the real numbers do? I think this also starts to seem quite natural when we imagine topology in terms of “observations” or “verifiable statements”. Again, saying any more about that in my chapter would be a substantial digression; but I’d be interested to hear any thoughts about it in the comments here!

by shulman (viritrilbia@gmail.com) at February 26, 2015 08:26 PM

Clifford V. Johnson - Asymptotia

Ceiba Speciosa
Saw all the fluffy stuff on the ground. Took me a while to "cotton on" and look up: Ceiba speciosa, the "silk floss" tree. -cvj

by Clifford at February 26, 2015 05:29 PM

astrobites - astro-ph reader's digest

Black Holes Grow First in Mergers

Title: Following Black Hole Scaling Relations Through Gas-Rich Mergers
Authors: Anne M. Medling, Vivian U, Claire E. Max, David B. Sanders, Lee Armus, Bradford Holden, Etsuko Mieda, Shelley A. Wright, James E. Larkin
First Author’s Institution: Research School of Astronomy & Astrophysics, Mount Stromlo Observatory, Australia National University, Cotter Road, Weston, ACT 2611, Australia
Status: Accepted to ApJ

It’s currently accepted theory that every galaxy has a super massive black hole (SMBH) at it’s center. These black holes have been observed to be strongly correlated with the galaxy’s bulge mass, total stellar mass and velocity dispersion.

Figure 1: NGC 2623 – one of the merging galaxies observed in this study. Image credit: NASA.

The mechanism driving this has long been thought to be mergers (although there are recent findings showing that bulgeless galaxies which have not undergone a merger also host high-mass SMBHs), which funnel gas into the center of a galaxy, where it is either used in a burst of star formation or accreted by the black hole. The black hole itself can regulate both its own growth and the galaxy's if it becomes active and throws off huge winds which expel, on short timescales, the gas needed for star formation and black hole growth.

To understand the interplay between these effects, the authors of this paper study 9 nearby ultraluminous infrared galaxies at a range of stages through a merger and measure the mass of the black hole at the center of each. They calculated these masses from spectra taken with the Keck telescopes on Mauna Kea, Hawaii, by measuring the stellar kinematics (the movement of the stars around the black hole) as shown by the Doppler broadening of the lines in the spectra. Doppler broadening occurs when gas emits light at a very specific wavelength but is also moving either towards or away from us (or both, if it is rotating around a central object). Some of this emission is Doppler shifted to larger or smaller wavelengths, effectively smearing (or broadening) a narrow line into a broad one.
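
As a simplified illustration of how a line width becomes a velocity: for speeds well below that of light the Doppler shift gives Δλ/λ ≈ v/c, so a measured broadening translates directly into a characteristic stellar speed. The wavelength and width below are made-up numbers chosen only to show the conversion; the actual analysis fits full kinematic models to the spectra.

    # Convert a Doppler line width into a characteristic velocity: Δλ/λ ≈ v/c.
    # Illustrative numbers only; the paper fits detailed kinematic models instead.

    c = 299_792.458           # speed of light in km/s

    lambda_rest  = 2.2935e4   # rest wavelength in Angstrom (assumed near-infrared stellar feature)
    delta_lambda = 15.0       # measured broadening in Angstrom (made-up value)

    sigma = c * delta_lambda / lambda_rest
    print(f"velocity dispersion ~ {sigma:.0f} km/s")   # ~200 km/s for these inputs

Velocity dispersions of a few hundred km/s are typical for the bulges of massive galaxies, which is why broadenings of this order are what the observations are after.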

Figure 2: The mass of the black hole against the stellar velocity dispersion, sigma, of the 9 galaxies observed in this study. Also shown are galaxies from McConnell & Ma (2013) and the best-fit line to that data as a comparison to typical galaxies. Originally Figure 2 in Medling et al. (2015).

From this estimate of the rotational velocities of the stars around the centre of the galaxy, the mass of the black hole and the velocity dispersion can be calculated. These measurements for the 9 galaxies in this study are plotted in Figure 2 (originally Fig. 2 in Medling et al. 2015) and are shown to be either within the scatter or above the typical relationship between black hole mass and velocity dispersion.

The authors run a Kolmogorov-Smirnov statistical test on the data to confirm that these merging galaxies are drawn from a different population than those that lie on the relation, with a p-value of 0.003, i.e. the likelihood of these merging galaxies being drawn from the same population as the typical galaxies is 0.3%.
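
For readers unfamiliar with the test: a two-sample Kolmogorov-Smirnov test measures the largest gap between the cumulative distributions of two samples and asks how probable such a gap would be if both samples came from the same parent population. The sketch below uses synthetic offsets from a scaling relation, not the paper's measurements, just to show the mechanics.

    # Minimal two-sample KS test with synthetic data (not the paper's measurements).
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Offsets from a scaling relation: "typical" galaxies scatter around zero,
    # a hypothetical merger sample is systematically offset upwards.
    typical = rng.normal(loc=0.0, scale=0.3, size=70)
    mergers = rng.normal(loc=0.5, scale=0.3, size=9)

    stat, p_value = ks_2samp(typical, mergers)
    print(f"KS statistic = {stat:.2f}, p-value = {p_value:.4f}")

A small p-value (the paper quotes 0.003) means it is very unlikely that the merging galaxies and the typical galaxies share the same underlying distribution.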

The black holes therefore have a larger mass than they should for the stellar velocity dispersion of the galaxy. This suggests that the black hole grows first in a merger, before the bulges of the two galaxies have fully merged and settled into a gravitationally stable (virialized) structure. Although measuring the velocity dispersion in a bulge that actually consists of two bulges merging is difficult and can produce errors in the measurement, simulations have shown that the velocity dispersion will only be underestimated by 50%, an amount which is not significant enough to change these results.

The authors also consider whether there has been enough time since the merger began for these black holes to grow so massive. Assuming that both galaxies used to lie on typical scaling relations for the black hole mass, and that the black holes accreted at the typical (Eddington) rate, they find that the growth should have taken somewhere in the range of a few tens to a hundred million years – a time much less than the simulated duration of a merger.
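
As a sanity check on that timescale, growth at the Eddington rate is exponential in time, with an e-folding time of roughly 50 million years for a radiative efficiency of about 10%. The estimate below is my own, with an assumed efficiency, and is not the authors' calculation.

    # Exponential growth at the Eddington rate: M(t) = M0 * exp(t / t_efold),
    # with t_efold = (eps / (1 - eps)) * sigma_T * c / (4 * pi * G * m_p).
    # Rough sanity check with an assumed radiative efficiency; not the authors' numbers.
    import math

    sigma_T = 6.6524e-29   # Thomson cross-section, m^2
    c       = 2.9979e8     # speed of light, m/s
    G       = 6.6743e-11   # gravitational constant, m^3 kg^-1 s^-2
    m_p     = 1.6726e-27   # proton mass, kg
    year    = 3.156e7      # seconds per year

    eps = 0.1              # assumed radiative efficiency
    t_efold = (eps / (1 - eps)) * sigma_T * c / (4 * math.pi * G * m_p)

    for growth_factor in (2, 4, 10):
        t = t_efold * math.log(growth_factor)
        print(f"grow by {growth_factor:>2}x: ~{t / year / 1e6:.0f} Myr")

Doubling or quadrupling a black hole's mass at the Eddington rate indeed takes a few tens of millions of years, comfortably shorter than the simulated duration of a merger.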

A second consideration is how long it will take for these galaxies to virialize and for their velocity dispersion to increase to bring each one back onto the typical scaling relation with the black hole mass. They consider how many more stars need to form in order for the velocity dispersion in the bulge to reach the required value. Taking the measured star formation rates of these galaxies gives a range of timescales of about 1-2 billion years, which is consistent with simulated merger timescales. It is therefore plausible that these galaxies can return to the black hole mass-velocity dispersion relation by the time they have finished merging.

The authors therefore conclude that black hole fueling and growth begin in the early stages of a merger and can outpace the formation of the bulge and any bursts of star formation. To confirm this result, measurements of a much larger sample of currently merging galaxies need to be taken – the question is, where do we look?

by Becky Smethurst at February 26, 2015 02:11 PM

Symmetrybreaking - Fermilab/SLAC

From the Standard Model to space

A group of scientists who started at particle physics experiments move their careers to the final frontier.

As a member of the ATLAS experiment at the Large Hadron Collider, Ryan Rios spent 2007 to 2012 surrounded by fellow physicists.

Now, as a senior research engineer for Lockheed Martin at NASA’s Johnson Space Center, he still sees his fair share.

He’s not the only scientist to have made the leap from experimenting on Earth to keeping astronauts safe in space. Rios works on a small team that includes colleagues with backgrounds in physics, biology, radiation health, engineering, information technology and statistics.

“I didn’t really leave particle physics, I just kind of changed venues,” Rios says. “A lot of the skillsets I developed on ATLAS I was able to transfer over pretty easily.”

The group at Johnson Space Center supports current and planned crewed space missions by designing, testing and monitoring particle detectors that measure radiation levels in space.

Massive solar flares and other solar events that accelerate particles, other sources of cosmic radiation, and weak spots in Earth’s magnetic field can all pose radiation threats to astronauts. Members of the radiation group provide advisories on such sources. This makes it possible to warn astronauts, who can then seek shelter in heavier-shielded areas of the spacecraft.

Johnson Space Center has a focus on training and supporting astronauts and planning for future crewed missions. Rios has done work for the International Space Station and the robotic Orion mission that launched in December as a test for future crewed missions. His group recently developed a new radiation detector for the space station crew.

Rios worked at CERN for four years as a graduate student and postdoc at Southern Methodist University in Dallas. At CERN he was introduced to a physics analysis platform called ROOT, which is also used at NASA. Some of the particle detectors he works with now were developed by a CERN-based collaboration.

Fellow Johnson Space Center worker Kerry Lee wound up as the group lead for radiation operations after using ROOT during his three years as a summer student on the Collider Detector at Fermilab (CDF) experiment.

“As a kid, I just knew I wanted to work at NASA,” says Lee, who grew up in rural Wyoming. He pursued an education in engineering physics and “enjoyed the physics part more than the engineering.” He received a master’s degree in particle physics at Texas Tech University.

A professor there helped him attain his current position. “He asked me what I really wanted to do in life,” Lee says, “and I told him, ‘NASA.’”

He worked on data analysis for a detector aboard the robotic Mars Odyssey mission, which flew in 2001. “The tools I learned at Fermilab for data analysis were perfectly applicable for the analysis on this detector,” he says.

One of his most enjoyable roles was training astronauts to use radiation-monitoring equipment in space.

“Every one of the crew members would come through [for training],” he says. “Meeting the astronauts is very exciting—it is always a diverse and interesting group of people. I really enjoy that part of the job.”

Physics was also the starting point for Martin Leitgab, a senior research engineer who joined the Johnson Space Center group in 2013. As a PhD student, Leitgab worked at the PHENIX detector at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider. He also took part in the Belle Collaboration at the KEK B-factory in Japan.

A native of Austria who had attended the University of Illinois at Urbana-Champaign, Leitgab says his path to NASA was fairly roundabout.

“When I finished my PhD work I was at a crossroads—I did not have a master plan,” he says.

He says he became interested in aerospace and wrote some papers related to solar power in space. His wife is from Texas, so Johnson Space Center seemed to be a good fit.

“My job is to make sure that the detector built for the International Space Station works as it should, and to get data out of it,” he says. “It’s very similar to what I did before… The hardware is very different, but the experimental approach in testing and debugging detectors, debugging the software that reads out the data from the detectors and determining the system efficiency and calibration—that’s pretty much a one-to-one comparison with high-energy physics detectors work.”

Leitgab, Lee and Rios all say the small teams and tight, product-driven deadlines at NASA represent a departure from the typically massive collaborations for major particle physics experiments. But other things are very familiar: For example, NASA’s extensive collection of acronyms.

Rios says he relishes his new role but is glad to have worked on one of the experiments that in 2012 discovered the Higgs boson. “At the end of the day, I had the opportunity to work on a very huge discovery—probably the biggest one of the 21st century we’ll see,” he says.

 


by Glenn Roberts Jr. at February 26, 2015 02:00 PM

February 25, 2015

astrobites - astro-ph reader's digest

Open Educational Resources for Astronomy

Last Spring I taught an introductory astronomy course for non-science majors. It was difficult and fun. One of the most difficult parts was inventing activities and homework to teach specific concepts. Sometimes my activities fell flat. Thankfully, I had access to a number of astronomy education resources: the textbook, a workbook full of in-class tutorials, and the professors in my department who had previously taught introductory astronomy.

Open educational resources are meant to serve in this capacity, especially for teachers and students without the money to buy expensive texts. Like open source software, open educational resources are publicly-licensed and distributed in a format that encourages improvement and evolution. For examples, check out the resources hosted by Wikiversity. This is a sister project of Wikipedia’s, providing learning materials and a wiki forum to edit and remix those materials. It’s great for teachers in all disciplines! And it hosts a lot of astronomy material. But like Wikipedia, it’s deep and wide. It’s easy to get lost. (“Wait, why am I reading about R2D2?”) And like articles on Wikipedia, the learning materials on Wikiversity vary in quality.

Today’s paper introduces a project called astroEDU. They’re aiming to make astronomy learning resources, like those you can find on Wikiversity, easier to find and of higher quality. To do this, the authors introduce a peer-review structure for education materials modeled on the one widely-accepted for scholarly research. Educators may submit a learning activity to the astroEDU website. The project is evaluated by two blind reviewers, an educator and an astronomer. It may go through revision, or it may be scrapped. If it’s not scrapped, it’s published on the website, and sister sites like Open Educational Resources Commons. The result is a simple, excellent lesson plan describing the learning goals, any objects you need to complete the activity, step by step instructions, and ideas to find out what your students learned.

Screenshot from “Star in a Box”, an educational activity freely available at astroEDU.

Above is a screenshot from an example activity, “Star in a Box”, which won an award last year from the Community for Science Education in Europe. It uses a web-based simulation tool developed by the Las Cumbres Observatory. Students are directed to vary the initial mass of a model star and explore its evolution in the Hertzsprung-Russell plane. This is the kind of thing I could have used to supplement the textbook in my introductory astronomy course. And so could a high school teacher struggling along without any textbooks.

AstroEDU is targeted at primary and secondary school teachers. It was launched only a year ago, supported by the International Astronomical Union’s Office for Astronomy Development. It may grow into a powerful tool for open educational resources, something like a peer-reviewed Wikiversity. If you are a professional astronomer or an educator, it looks like you can help by signing up as a volunteer reviewer.

by Brett Deaton at February 25, 2015 08:35 PM

Jester - Resonaances

Persistent trouble with bees
No, I still have nothing to say about colony collapse disorder... this blog will stick to physics for at least 2 more years. This is an update on the anomalies in B decays reported by the LHCbee experiment. The two most important ones are:

  1. The  3.7 sigma deviation from standard model predictions in the differential distribution of the B➝K*μ+μ- decay products.
  2.  The 2.6 sigma violation of lepton flavor universality in B+→K+l+l- decays. 

 The first anomaly is statistically more significant. However, the theoretical error of the standard model prediction is not trivial to estimate, and the significance of the anomaly is subject to fierce discussions: estimates in the literature range from 4.5 sigma to 1 sigma, depending on what is assumed about QCD uncertainties. For this reason, the second anomaly made this story much more intriguing. In that case, LHCb measures the ratio of the decay with muons and with electrons: B+→K+μ+μ- vs B+→K+e+e-. This observable is theoretically clean, as large QCD uncertainties cancel in the ratio. Of course, 2.6 sigma significance is not too impressive; LHCb once had a bigger anomaly (remember CP violation in D meson decays?) that is now long gone. But it's fair to say that the two anomalies together are marginally interesting.

One nice thing is that both anomalies can be explained at the same time by a simple modification of the standard model. Namely, one needs to add a 4-fermion coupling between a b-quark, an s-quark, and two muons, schematically a (b̄s)(μ̄μ) contact operator suppressed by 1/Λ², with Λ of order 30 TeV. Just this one extra coupling greatly improves the fit to the data, though other similar couplings could be simultaneously present. The 4-fermion operators can be an effective description of new heavy particles coupled to quarks and leptons. For example, a leptoquark (a scalar particle with a non-zero color charge and lepton number) or a Z' (a neutral U(1) vector boson) with mass in the few-TeV range have been proposed. These are of course simple models created ad hoc. Attempts to put these particles in a bigger picture of physics beyond the standard model have not been very convincing so far, which may be one reason why the anomalies are viewed a bit skeptically. The flip side is that, if the anomalies turn out to be real, they will point to unexpected symmetry structures around the corner.

Another nice element of this story is that more relevant information will become available in the near future. The first anomaly is based on just 1 fb-1 of LHCb data, and it will be updated to the full 3 fb-1 some time this year. Furthermore, there are literally dozens of other B decays where the 4-fermion operators responsible for the anomalies could also show up. In fact, there may already be some hints that this is happening: in the table borrowed from this paper we can see several other 2-sigmish anomalies in B decays that may possibly have the same origin. More data and measurements in more decay channels should clarify the picture. In particular, violation of lepton flavor universality may come together with lepton flavor violation. Observation of decays forbidden in the standard model, such as B→Keμ or B→Kμτ, would be a spectacular and unequivocal signal of new physics.

by Jester (noreply@blogger.com) at February 25, 2015 08:32 PM

arXiv blog

Data Mining Indian Recipes Reveals New Food Pairing Phenomenon

By studying the network of links between Indian recipes, computer scientists have discovered that the presence of certain spices makes a meal much less likely to contain ingredients with flavors in common.


The food pairing hypothesis is the idea that ingredients that share the same flavors ought to combine well in recipes. For example, the English chef Heston Blumenthal discovered that white chocolate and caviar share many flavors and turn out to be a good combination. Other unusual combinations that seem to confirm the hypothesis include strawberries and peas, asparagus and butter, and chocolate and blue cheese.

February 25, 2015 06:05 PM

The n-Category Cafe

Concepts of Sameness (Part 3)

Now I’d like to switch to pondering different approaches to equality. (Eventually I’ll have put all these pieces together into a coherent essay, but not yet.)

We tend to think of <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> as a fundamental property of equality, perhaps the most fundamental of all. But what is it actually used for? I don’t really know. I sometimes joke that equations of the form <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> are the only really true ones — since any other equation says that different things are equal — but they’re also completely useless.

But maybe I’m wrong. Maybe equations of the form <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> are useful in some way. I can imagine one coming in handy at the end of a proof by contradiction where you show some assumptions imply <semantics>xx<annotation encoding="application/x-tex">x \ne x</annotation></semantics>. But I don’t remember ever doing such a proof… and I have trouble imagining that you ever need to use a proof of this style.

If you’ve used the equation <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> in your own work, please let me know.

To explain my question a bit more precisely, it will help to choose a specific formalism: first-order classical logic with equality. We can get the rules for this system by taking first-order classical logic with function symbols and adding a binary predicate “<semantics>=<annotation encoding="application/x-tex">=</annotation></semantics>” together with three axiom schemas:

1. Reflexivity: for each variable <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics>,

<semantics>x=x<annotation encoding="application/x-tex"> x = x </annotation></semantics>

2. Substitution for functions: for any variables <semantics>x,y<annotation encoding="application/x-tex">x, y</annotation></semantics> and any function symbol <semantics>f<annotation encoding="application/x-tex">f</annotation></semantics>,

<semantics>x=yf(,x,)=f(,y,)<annotation encoding="application/x-tex"> x = y \; \implies\; f(\dots, x, \dots) = f(\dots, y, \dots) </annotation></semantics>

3. Substitution for formulas: For any variables <semantics>x,y<annotation encoding="application/x-tex">x, y</annotation></semantics> and any formula <semantics>ϕ<annotation encoding="application/x-tex">\phi</annotation></semantics>, if <semantics>ϕ<annotation encoding="application/x-tex">\phi'</annotation></semantics> is obtained by replacing any number of free occurrences of <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> in <semantics>ϕ<annotation encoding="application/x-tex">\phi</annotation></semantics> with <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics>, such that these remain free occurrences of <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics>, then

<semantics>x=y(ϕϕ)<annotation encoding="application/x-tex"> x = y \;\implies\; (\phi \;\implies\; \phi') </annotation></semantics>

Where did symmetry and transitivity of equality go? They can actually be derived!

For transitivity, use ‘substitution for formulas’ and take <semantics>ϕ<annotation encoding="application/x-tex">\phi</annotation></semantics> to be <semantics>x=z<annotation encoding="application/x-tex">x = z</annotation></semantics>, so that <semantics>ϕ<annotation encoding="application/x-tex">\phi'</annotation></semantics> is <semantics>y=z<annotation encoding="application/x-tex">y = z</annotation></semantics>. Then we get

<semantics>x=y(x=zy=z)<annotation encoding="application/x-tex"> x=y \;\implies\; (x = z \;\implies\; y = z) </annotation></semantics>

This is almost transitivity. From this we can derive

<semantics>(x=y&x=z)y=z<annotation encoding="application/x-tex"> (x = y \;\&\; x = z) \;\implies\; y = z </annotation></semantics>

and from this we can derive the usual statement of transitivity

<semantics>(x=y&y=z)x=z<annotation encoding="application/x-tex"> (x = y\; \& \; y = z) \;\implies\; x = z </annotation></semantics>

by choosing different names of variables and using symmetry of equality.

But how do we get symmetry? We can derive this using reflexivity and substitution for formulas. Take <semantics>ϕ<annotation encoding="application/x-tex">\phi</annotation></semantics> to be <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> and take <semantics>ϕ<annotation encoding="application/x-tex">\phi'</annotation></semantics> to be the result of substituting the first instance of <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> with <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics>: that is, <semantics>y=x<annotation encoding="application/x-tex">y = x</annotation></semantics>. Then we get

<semantics>x=y(x=xy=x)<annotation encoding="application/x-tex"> x = y \;\implies \;(x = x \;\implies \;y = x) </annotation></semantics>

Using <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics>, we can derive

<semantics>x=yy=x<annotation encoding="application/x-tex"> x = y \;\implies\; y = x </annotation></semantics>

This is the only time I remember using <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> to derive something! So maybe this equation is good for something. But if proving symmetry and transitivity of equality is the only thing it’s good for, I’m not very impressed. I would have been happy to take both of these as axioms, if necessary. After all, people often do.
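
For what it’s worth, here is a minimal Lean 4 sketch of the two derivations above (my own formalization, not from the post). Lean’s type-theoretic equality is of course not literally first-order logic, but its built-in rfl plays the role of the reflexivity schema and Eq.subst plays the role of ‘substitution for formulas’, with the motive argument standing in for the formula ϕ:

    -- Symmetry from reflexivity + substitution: take φ to be `x = x`,
    -- and substitute y for the first occurrence of x (the motive below).
    theorem symm_from_subst {α : Type} {x y : α} (h : x = y) : y = x :=
      Eq.subst (motive := fun z => z = x) h rfl

    -- "Almost transitivity" from substitution alone: take φ to be `x = z`,
    -- so that φ' is `y = z`; no use of reflexivity here.
    theorem almost_trans {α : Type} {x y z : α} (hxy : x = y) (hxz : x = z) : y = z :=
      Eq.subst (motive := fun w => w = z) hxy hxz

The second theorem is exactly the displayed implication from the transitivity argument, with the two hypotheses curried.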

So, just to get the conversation started, I’ll conjecture that reflexivity of equality is completely useless if we include symmetry of equality in our axioms. Namely:

Conjecture. Any theorem in classical first-order logic with equality that does not include a subformula of the form <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics> for any variable <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> can also be derived from a variant where we drop reflexivity, keep substitution for functions and substitution for formulas, and add this axiom schema:

1’. Symmetry: for any variables <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> and <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics>,

<semantics>x=yy=x<annotation encoding="application/x-tex"> x = y \; \implies \; y = x </annotation></semantics>

Proof theorists: can you show this is true, or find a counterexample? We’ve seen that we can get transitivity from this setup, and then I don’t really see how it hurts to omit <semantics>x=x<annotation encoding="application/x-tex">x = x</annotation></semantics>. I may be forgetting something, though!

by john (baez@math.ucr.edu) at February 25, 2015 04:10 PM

The n-Category Cafe

Concepts of Sameness (Part 2)

I’m writing about ‘concepts of sameness’ for Elaine Landry’s book Category Theory for the Working Philosopher. After an initial section on a passage by Heraclitus, I had planned to write a bit about Gongsun Long’s white horse paradox — or more precisely, his dialog When a White Horse is Not a Horse.

However, this is turning out to be harder than I thought, and more of a digression than I want. So I’ll probably drop this plan. But I have a few preliminary notes, and I might as well share them.

Gongsun Long

Gongsun Long was a Chinese philosopher who lived from around 325 to 250 BC. Besides the better-known Confucian and Taoist schools of Chinese philosophy, another important school at this time was the Mohists, who were more interested in science and logic. Gongsun Long is considered a member of the Mohist-influenced ‘School of Names’: a loose group of logicians, not a school in any organized sense. They are remembered largely for their paradoxes: for example, they independently invented a version of Zeno’s paradox.

As with Heraclitus, most of Gongsun Long’s writings are lost. Joseph Needham [N] has written that this is one of the worst losses of ancient Chinese texts, which in general have survived much better than the Greek ones. The Gongsun Longzi is a text that originally contained 14 of his essays. Now only six survive. The second essay discusses the question “when is a white horse not a horse?”

The White Horse Paradox

When I first heard this ‘paradox’ I didn’t get it: it just seemed strange and silly, not a real paradox. I’m still not sure I get it. But I’ve decided that’s what makes it interesting: it seems to rely on modes of thought, or speech, that are quite alien to me. What counts as a ‘paradox’ is more culturally specific than you might realize.

If a few weeks ago you’d asked me how the paradox goes, I might have said something like this:

A white horse is not a horse, because where there is whiteness, there cannot be horseness, and where there is horseness there cannot be whiteness.

However this is inaccurate because there was no word like ‘whiteness’ (let alone ‘horseness’) in classical Chinese.

Realizing that classical Chinese does not have nouns and adjectives as separate parts of speech may help explain what’s going on here. To get into the mood for this paradox, we shouldn’t think of a horse as a thing to which the predicate ‘whiteness’ applies. We shouldn’t think of the world as consisting of things <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> and, separately, predicates <semantics>P<annotation encoding="application/x-tex">P</annotation></semantics>, which combine to form assertions <semantics>P(x)<annotation encoding="application/x-tex">P(x)</annotation></semantics>. Instead, both ‘white’ and ‘horse’ are on more of an equal footing.

I like this idea because it suggests that predicate logic arose in the West thanks to peculiarities of Indo-European grammar that aren’t shared by all languages. This could free us up to have some new ideas.

Here’s how the dialog actually goes. I’ll use Angus Graham’s translation because it tries hard not to wash away the peculiar qualities of classical Chinese:

Is it admissible that white horse is not-horse?

S. It is admissible.

O. Why?

S. ‘Horse’ is used to name the shape; ‘white’ is used to name the color. What names the color is not what names the shape. Therefore I say white horse is not horse.

O. If we take horses having color as nonhorse, since there is no colorless horse in the world, can we say there is no horse in the world?

S. Horse obviously has color, which is why there is white horse. Suppose horse had no color, then there would just be horse, and where would you find white horse. The white is not horse. White horse is white and horse combined. Horse and white is horse, therefore I say white horse is non-horse.

(Chad Hansen writes: “Most commentaries have trouble with the sentence before the conclusion in F-8, “horse and white is horse,” since it appears to contradict the sophist’s intended conclusion. But recall the Mohists asserted that ox-horse both is and is not ox.” I’m not sure if that helps me, but anyway….)

O. If it is horse not yet combined with white which you deem horse, and white not yet combined with horse which you deem white, to compound the name ‘white horse’ for horse and white combined together is to give them when combined their names when uncombined, which is inadmissible. Therefore, I say, it is inadmissible that white horse is not horse.

S. ‘White’ does not fix anything as white; that may be left out of account. ‘White horse’ has ‘white’ fixing something as white; what fixes something as white is not ‘white’. ‘Horse’ neither selects nor excludes any colors, and therefore it can be answered with either yellow or black. ‘White horse’ selects some color and excludes others, and the yellow and the black are both excluded on grounds of color; therefore one may answer it only with white horse. What excludes none is not what excludes some. Therefore I say: white horse is not horse.

One possible anachronistic interpretation of the last passage is

The set of white horses is not equal to the set of horses, so “white horse” is not “horse”.

This makes sense, but it seems like a way of saying we can have <semantics>ST<annotation encoding="application/x-tex">S \subseteq T</annotation></semantics> while also <semantics>ST<annotation encoding="application/x-tex">S \ne T</annotation></semantics>. That would be a worthwhile observation around 300 BC — and it would even be worth trying to get people upset about this, back then! But it doesn’t seem very interesting today.

A more interesting interpretation of the overall dialog is given by Chad Hansen [H]. He argues that to understand it, we should think of both ‘white’ and ‘horse’ as mass nouns or ‘kinds of stuff’.

The issue of how two kinds of stuff can be present in the same place at the same time is a bit challenging — we see Plato battling with it in the Parmenides — and in some sense western mathematics deals with it by switching to a different setup, where we have a universe of entities <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> of which predicates <semantics>P<annotation encoding="application/x-tex">P</annotation></semantics> can be asserted. If <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> is a horse and <semantics>P<annotation encoding="application/x-tex">P</annotation></semantics> is ‘being white’, then <semantics>P(x)<annotation encoding="application/x-tex">P(x)</annotation></semantics> says the horse is white.

However, then we get Leibniz’s principle of the ‘identity of indiscernibles’, which is a way of defining equality. This says that <semantics>x=y<annotation encoding="application/x-tex">x = y</annotation></semantics> if and only if <semantics>P(x)P(y)<annotation encoding="application/x-tex">P(x) \iff P(y)</annotation></semantics> for all predicates <semantics>P<annotation encoding="application/x-tex">P</annotation></semantics>. By this account, an entity really amounts to nothing more than the predicates it satisfies!
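
As an aside, the Leibniz-style definition is easy to state formally. Here is a minimal Lean 4 sketch (my own, not from the essay; leibnizEq is a name I made up) checking that it coincides with the primitive equality:

    -- Leibniz equality: x and y satisfy exactly the same predicates.
    def leibnizEq {α : Type} (x y : α) : Prop :=
      ∀ P : α → Prop, P x ↔ P y

    -- It implies ordinary equality: instantiate P at `fun z => x = z`.
    theorem eq_of_leibnizEq {α : Type} {x y : α} (h : leibnizEq x y) : x = y :=
      (h (fun z => x = z)).mp rfl

    -- Conversely, equal things satisfy the same predicates.
    theorem leibnizEq_of_eq {α : Type} {x y : α} (h : x = y) : leibnizEq x y :=
      fun P => Eq.subst (motive := fun z => P x ↔ P z) h Iff.rfl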

This is where equality comes in — but as I said, all of this is seeming like too much of a distraction from my overall goals for this essay right now.

Notes

  • [N] Joseph Needham, Science and Civilisation in China vol. 2: History of Scientific Thought, Cambridge U. Press, Cambridge, 1956, p. 185.

  • [H] Chad Hansen, Mass nouns and “A white horse is not a horse”, Philosophy East and West 26 (1976), 189–209.

by john (baez@math.ucr.edu) at February 25, 2015 04:06 AM

February 24, 2015

Symmetrybreaking - Fermilab/SLAC

Physics in fast-forward

During their first run, experiments at the Large Hadron Collider rediscovered 50 years' worth of physics research in a single month.

In 2010, the brand-spanking-new CMS and ATLAS detectors started taking data for the first time. But the question physicists asked was not, “Where is the Higgs boson?” but rather “Do these things actually work?”

“Each detector is its own prototype,” says UCLA physicist Greg Rakness, run coordinator for the CMS experiment. “We don’t get trial runs with the LHC. As soon as the accelerator fires up, we’re collecting data.”

So LHC physicists searched for a few old friends: previously discovered particles.

“We can’t say we found a new particle unless we find all the old ones first,” says Fermilab senior scientist Dan Green. “Well, you can, but you would be wrong.”

Rediscovering 50 years' worth of particle physics research allowed LHC scientists to calibrate their rookie detectors and appraise their experiments’ reliability.

The CMS collaboration produced this graph using data from the first million LHC particle collisions identified as interesting by the experiment's trigger. It represents the instances in which the detector saw a pair of muons.

Muons are heavier versions of electrons. The LHC can produce muons in its particle collisions. It can also produce heavier particles that decay into muon pairs.

On the x-axis of the graph is the combined mass of two muons that appeared simultaneously in the aftermath of a high-energy LHC collision. On the y-axis is the number of times scientists saw each muon+muon mass combination.

On top of a large and raggedy-looking half-parabola, six sharp peaks emerge.

“Each peak represents a parent particle, which was produced during the collision and then spat out two muons during its decay,” Green says.

When muon pairs appear at a particular mass more often than random chance can explain, scientists can deduce that there must be some other process tipping the scale. This is how scientists find new particles and processes—by looking for an imbalance in the data and then teasing out the reason why.
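
For concreteness, here is a minimal sketch of the calculation behind such a plot (illustrative only, not CMS code; the event numbers are made up): for every event with two muons, compute the pair's invariant mass from the measured energies and momenta, then histogram those masses.

    # Minimal sketch of a dimuon invariant-mass calculation (illustrative, not CMS code).
    import numpy as np

    def invariant_mass(e1, p1, e2, p2):
        """Invariant mass of a two-particle system from energies and 3-momenta (GeV)."""
        e = e1 + e2
        p = np.asarray(p1) + np.asarray(p2)
        return np.sqrt(max(e**2 - np.dot(p, p), 0.0))

    # Toy event: two back-to-back muons with 45.6 GeV each (muon mass neglected).
    m = invariant_mass(45.6, [45.6, 0.0, 0.0], 45.6, [-45.6, 0.0, 0.0])
    print(m)  # ~91.2 GeV: this toy event would land in the Z peak

    # Filling a histogram of these masses over many events (on log-log axes)
    # reproduces the shape described here: a falling continuum with sharp peaks
    # at the masses of particles that decay to two muons.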

Each of the six peaks on this graph can be traced back to a well-known particle that decays to two muons.

Courtesy of: Dan Green

 

  • The rho [ρ] was discovered in 1961.
  • The J-psi [J/ Ψ] was discovered in 1974 (and earned a Nobel Prize for experimenters at the Massachusetts Institute of Technology and SLAC National Accelerator Laboratory).
  • The upsilon [Υ] was discovered in 1977.
  • The Z was discovered in 1983 (and earned a Nobel Prize for experimenters at CERN).

What originally took years of work and multiple experiments to untangle, the CMS and ATLAS collaborations rediscovered after only about a month.

“The LHC is higher energy and produces a lot more data than earlier accelerators,” Green says. “It’s like going from a garden hose to a fire hose. The data comes in amazingly fast.”

But even the LHC has its limitations. On the far-right side, the graph stops looking like a half-parabola and starts looking like a series of short, jutting lines.

“It looks chaotic because we just didn’t have enough data for events at higher masses,” Green says. “Eventually, we would expect to see a peak representing the Higgs decaying to two muons popping up at around 125 GeV. But we just hadn’t produced enough high-mass muons to see it yet.”

Over the summer, the CMS and ATLAS detectors will resume taking data—this time with collisions containing 60 percent more energy. Green says he and his colleagues are excited to push the boundaries of this graph to see what lies just out of reach.

 


 

by Sarah Charley at February 24, 2015 09:42 PM

Clifford V. Johnson - Asymptotia

Simulated meets Real!
Here's a freshly minted Oscar winner who played a scientist surrounded by... scientists! I'm with fellow physicists Erik Verlinde, Maria Spiropulu, and David Saltzberg at an event last month. Front centre are of course actors Eddie Redmayne (Best Actor winner 2015 for The Theory of Everything) and Felicity Jones (Best Actress nominee), along with the screenwriter of the film, Anthony McCarten. The British Consul-General Chris O'Connor is on the right. (Photo was courtesy of Getty Images.) [...]

by Clifford at February 24, 2015 02:06 AM

February 23, 2015

Sean Carroll - Preposterous Universe

I Wanna Live Forever

If you’re one of those people who look the universe in the eyeball without flinching, choosing to accept uncomfortable truths when they are supported by the implacable judgment of Science, then you’ve probably acknowledged that sitting is bad for you. Like, really bad. If you’re not convinced, the conclusions are available in helpful infographic form; here’s an excerpt.


And, you know, I sit down an awful lot. Doing science, writing, eating, playing poker — my favorite activities are remarkably sitting-based.

So I’ve finally broken down and done something about it. On the good advice of Carl Zimmer, I’ve augmented my desk at work with a Varidesk on top. The desk itself was formerly used by Richard Feynman, so I wasn’t exactly going to give that up and replace it with a standing desk. But this little gizmo lets me spend most of my time at work on my feet instead of sitting on my butt, while preserving the previous furniture.


It’s a pretty nifty device, actually. Room enough for my laptop, monitor, keyboard, mouse pad, and the requisite few cups for coffee. Most importantly for a lazybones like me, it doesn’t force you to stand up absolutely all the time; gently pull some handles and the whole thing settles down to desktop level, ready for your normal chair-bound routine.


We’ll see how the whole thing goes. It’s one thing to buy something that allows you to stand while working, it’s another to actually do it. But at least I feel like I’m trying to be healthier. I should go have a sundae to celebrate.

by Sean Carroll at February 23, 2015 09:08 PM

ZapperZ - Physics and Physicists

Which Famous Physicist Should Be Depicted In The Movie Next?
Eddie Redmayne won the Oscar last night for his portrayal of Stephen Hawking in the movie "The Theory of Everything". So this got me thinking about which famous physicist should be portrayed next in a movie biography. Hollywood won't choose someone who isn't eccentric, famous, or in the news. So that rules out a lot.

I would think that Richard Feynman would make a rather compelling biographical movie. He certainly was a very complex person, and definitely not boring. They could give the movie a title of "Sure You Must Be Joking", or "Six Easy Pieces", or "Shut Up And Calculate", although the latter may not be entirely attributed to Feynman.

Hollywood, I'm available for consultation!

Zz.

by ZapperZ (noreply@blogger.com) at February 23, 2015 08:10 PM

Symmetrybreaking - Fermilab/SLAC

Video: LHC experiments prep for restart

Engineers and technicians have begun to close experiments in preparation for the next run.

The LHC is preparing to restart at almost double the collision energy of its previous run. The new energy will allow physicists to check previously untestable theories and explore new frontiers in particle physics.

When the LHC is on, counter-rotating beams of particles will be made to collide at four interaction points 100 meters underground, around which sit the huge detectors ALICE, ATLAS, CMS and LHCb.

In the video below, engineers and technicians prepare these four detectors to receive the showers of particles that will be created in collisions at energies of 13 trillion electronvolts.

The giant endcaps of the ATLAS detector are back in position and the wheels of the CMS detector are moving it back into its “closed” configuration. The huge red door of the ALICE experiment is closed up ready for restart, and the access door to the LHC tunnel is sealed with concrete blocks.


A version of this article was published by CERN.

 


by Cian O&#039;Luanaigh at February 23, 2015 04:50 PM

arXiv blog

Computational Anthropology Reveals How the Most Important People in History Vary by Culture

Data mining Wikipedia people reveals some surprising differences in the way eastern and western cultures identify important figures in history, say computational anthropologists.

 

February 23, 2015 04:18 PM

Tommaso Dorigo - Scientificblogging

New CP-Odd Higgs Boson Results By ATLAS
The paper to read today is one from the ATLAS collaboration at the CERN Large Hadron Collider - my competitors, as I work for the other experiment across the ring, CMS. ATLAS has just produced a new article which describes the search for the CP-odd A boson, a particle which arises in Supersymmetry as well as in more generic extensions of the Standard Model called "two-Higgs-doublet models". What are these?


by Tommaso Dorigo at February 23, 2015 03:00 PM

February 21, 2015

Jester - Resonaances

Weekend plot: rare decays of B mesons, once again
This weekend's plot shows the measurement of the branching fractions for neutral B and Bs mesons decays into  muon pairs:
This is not exactly a new thing. LHCb and CMS separately announced evidence for the B0s→μ+μ- decay in summer 2013, and a preliminary combination of their results appeared a few days after. The plot above comes from the recent paper where a more careful combination is performed, though the results change only slightly.

A neutral B meson is a  bound state of an anti-b-quark and a d-quark (B0) or an s-quark (B0s), while for an anti-B meson the quark and the antiquark are interchanged. Their decays to μ+μ- are interesting because they are very suppressed in the standard model. At the parton level, the quark-antiquark pair annihilates into a μ+μ- pair. As for all flavor changing neutral current processes, the lowest order diagrams mediating these decays occur at the 1-loop level. On top of that, there is the helicity suppression by the small muon mass, and the CKM suppression by the small Vts (B0s) or Vtd (B0) matrix elements. Beyond the standard model one or more of these suppression factors may be absent and the contribution could in principle exceed that of the standard model even if the new particles are as heavy as ~100 TeV. We already know this is not the case for the B0s→μ+μ- decay. The measured branching fraction (2.8 ± 0.7)x10^-9  agrees within 1 sigma with the standard model prediction (3.66±0.23)x10^-9. Further reducing the experimental error will be very interesting in view of observed anomalies in some other b-to-s-quark transitions. On the other hand, the room for new physics to show up  is limited,  as the theoretical error may soon become a showstopper.

The situation is a bit different for the B0→μ+μ- decay, where there is still relatively more room for new physics. This process has been less in the spotlight. This is partly due to a theoretical prejudice: in most popular new physics models it is impossible to generate a large effect in this decay without generating a corresponding excess in B0s→μ+μ-. Moreover, B0→μ+μ- is experimentally more difficult: the branching fraction predicted by the standard model is (1.06±0.09)x10^-10, which is 30 times smaller than that for B0s→μ+μ-. In fact, a 3σ evidence for the B0→μ+μ- decay appears only after combining LHCb and CMS data. More interestingly, the measured branching fraction, (3.9±1.4)x10^-10, is some 2 sigma above the standard model value. Of course, this is most likely a statistical fluke, but nevertheless it will be interesting to see an update once the 13-TeV LHC run collects enough data.
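
As a quick cross-check of the quoted significances (my own back-of-the-envelope estimate, treating the experimental and theoretical errors as Gaussian and uncorrelated), one can compute the pull of each measurement against its standard model prediction:

    # Back-of-the-envelope pulls for the two branching fractions quoted above.
    import math

    def pull(measured, err_meas, predicted, err_pred):
        """Number of standard deviations between a measurement and a prediction."""
        return (measured - predicted) / math.sqrt(err_meas**2 + err_pred**2)

    # B0s -> mu+ mu-, in units of 1e-9
    print(pull(2.8, 0.7, 3.66, 0.23))   # about -1.2: compatible with the standard model
    # B0 -> mu+ mu-, in units of 1e-10
    print(pull(3.9, 1.4, 1.06, 0.09))   # about +2.0: the ~2 sigma excess mentioned above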

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Do-or-die year
The year 2015 began as any other year... I mean the hangover situation in particle physics. We have a theory of fundamental interactions - the Standard Model - that we know is certainly not the final theory, because it cannot account for dark matter, matter-antimatter asymmetry, and cosmic inflation. At the same time, the Standard Model perfectly describes any experiment we have performed here on Earth (up to a few outliers that can be explained as statistical fluctuations). This is puzzling, because some of these experiments are in principle sensitive to very heavy particles, sometimes well beyond the reach of the LHC or any future colliders. Theorists cannot offer much help at this point. Until recently, naturalness was the guiding principle in constructing new theories, but few have retained confidence in it. No other serious paradigm has appeared to replace naturalness. In short, we know for sure there is new physics beyond the Standard Model, but we have absolutely no clue what it is and how much energy is needed to access it.

Yet 2015 is different because it is the year when LHC restarts at 13 TeV energy.  We should expect high-energy collisions some time in summer, and around 10 inverse femtobarns of data by the end of the year. This is the last significant energy jump most of us may experience before retirement, therefore this year is going to be absolutely crucial for the future of particle physics. If, by next Christmas, we don't hear any whispers of anomalies in LHC data, we will have to brace for tough times ahead. With no energy increase in sight, slow experimental progress, and no theoretical hints for a better theory, particle physics as we know it will be in deep merde.

You may protest this is too pessimistic. In principle, new physics may show up at the LHC anytime between this fall and the year 2030 when 3 inverse attobarns of data will have been accumulated. So the hope will not die completely anytime soon. However, the subjective probability of making a discovery will decrease exponentially as time goes on, as you can see in the attached plot. Without a discovery, the mood will soon plummet, resembling something of the late Tevatron, rather than the thrill of pushing the energy frontier that we're experiencing now.

But for now, anything may yet happen. Cross your fingers.

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Weekend plot: spin-dependent dark matter
This weekend plot is borrowed from a nice recent review on dark matter detection:
It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of spin vectors of a dark matter particle and a nucleon (proton or neutron). This is usually referred to as the spin-dependent scattering, although other kinds of spin-dependent forces that also depend on the relative velocity are possible.

In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus. One can, very roughly, think of this process as scattering on an odd unpaired nucleon. For this reason, xenon target experiments such as Xenon100 or LUX are less sensitive to the spin-dependent scattering on protons, because xenon nuclei have an even number of protons. In this case, experiments that contain fluorine in their target molecules have the best sensitivity. This is the case for the COUPP, Picasso, and SIMPLE experiments, which currently set the strongest limits on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, the limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun. But these limits depend on what dark matter annihilates into, and therefore they are much more model-dependent than the direct detection limits.
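
For concreteness, the statement about the proton and neutron spin expectation values is usually summarized (schematically; this is my addition of the standard zero-momentum-transfer expression, not a formula from the post) as

\[ \sigma_{SD} \;\propto\; \frac{J+1}{J}\,\bigl( a_p \langle S_p \rangle + a_n \langle S_n \rangle \bigr)^2 , \]

where J is the nuclear spin, a_p and a_n are the dark matter couplings to protons and neutrons, and ⟨S_p⟩, ⟨S_n⟩ are the expectation values of the proton and neutron spins in the nucleus. Xenon isotopes have a sizable ⟨S_n⟩ but a small ⟨S_p⟩, which is why fluorine-based targets do better for the proton coupling.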

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Clifford V. Johnson - Asymptotia

Pre-Oscar Bash: Hurrah for Science at the Movies?
It is hard not to get caught up each year in the Oscar business if you live in this town and care about film. If you care about film, you're probably just mostly annoyed about the whole thing, because the slate of nominations and eventual winners hardly represents the outcome of careful thought about relative merits and so forth. The trick is to forget being annoyed and either hide from the whole thing or embrace it as a fun silly thing that does not mean too much. This year, since there have been a number of high profile films that help raise awareness of and interest in science and scientists, I have definitely not chosen the "hide away" option. Whatever one thinks of how good or bad "The Theory of Everything", "The Imitation Game" and "Interstellar" might be, I think it is simply silly to ignore the fact that it is a net positive thing that they've got millions of people talking about science and science-related things while out on their movie night. That's a good thing, and as I've been saying for the last several months (see e.g. here and here), good enough reason for people interested in science engagement to be at least broadly supportive of the films, because that'll encourage more to be made, and the more such films are being made, the better the chances are that even better ones get made.

This is all a preface to admitting that I went to one of those fancy pre-Oscar parties last night. It was put on by the British Consul-General in Los Angeles (sort of a followup to the one I went to last month mentioned here) in celebration of the British Film industry and the large number of British Oscar [...]

by Clifford at February 21, 2015 06:15 PM

February 20, 2015

Lubos Motl - string vacua and pheno

Barry Kripke wrote a paper on light-cone-quantized string theory
In the S08E15 episode of The Big Bang Theory, Ms Wolowitz died. The characters were sad and Sheldon was the first one who said something touching. I think it was a decent way to deal with the real-world death of Carol Ann Susi who provided Ms Wolowitz with her voice.

The departure of Ms Wolowitz abruptly solved a jealousy-ignited argument between Stewart and Howard revolving around the furniture from the Wolowitz house.

Also, if you missed that, Penny learned that she's been getting tests from Amy who was comparing her intelligence to the intelligence of the chimps. Penny did pretty well, probably more so than Leonard.




But the episode began with the twist described in the title. Barry brought a bottle to Amy because she had previously helped him with a paper that Kripke wrote and that was apparently very successful.




Kripke revealed that the paper was on the wight-cone quantization (I suppose he meant light-cone quantization) of string theory.

It's funny because some of my most well-known papers were about the light-cone quantization (in particular, Matrix theory is a light-cone-quantized description of string/M-theory), and I've been a big fan of this (not terribly widely studied) approach to string theory since 1994 when I began to learn string theory at the technical level. There are no bad ghosts (negative-norm states) or local redundancies in that description (well, except for the \(U(N)\) gauge symmetry if we use the matrix model description) which is "very physical" from a certain perspective.
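
For readers who want the jargon unpacked, here is a minimal sketch of what light-cone quantization means (my gloss, nothing from the episode): one singles out a light-like combination of coordinates as the time variable,

\[ x^\pm = \frac{x^0 \pm x^{D-1}}{\sqrt{2}}, \qquad p^- = \frac{\vec p_\perp^{\,2} + m^2}{2 p^+}, \]

so that \(x^+\) plays the role of time and \(p^-\) of the Hamiltonian; after the constraints are solved, only the transverse oscillators remain as independent degrees of freedom, which is one way to see why no negative-norm states appear in this description.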

Throughout the episode, Sheldon was jealous and unhappy that he had left string theory. Penny was trying to help him to "let it go"; this effort turned against her later. Recall that in April 2014, the writers turned Sheldon into a complete idiot who had only been doing string theory because some classmates had been beating him with a string theory textbook and who suddenly decided that he no longer considered string theory a vital branch of the scientific research.

Yesterday's episode fixed that harm to string theory – but it hasn't really fixed the harm done to Sheldon's image. Nothing against Kripke, but the path that led him to write papers on string theory seems rather bizarre to me. When he appeared in the sitcom for the first time, I was convinced that we had quite some data that he was just some low-energy, low-brow, and perhaps experimental physicist. Those usually can't write papers on light-cone-quantized string theory.

But the writers have gradually transformed the subdiscipline of physics that Kripke is good at (this was not the first episode in which Kripke looked like a theoretical physicist). Of course, this is a twist that I find rather strange and unlikely but what can we do? Despite his speech disorder and somewhat obnoxious behavior, we should praise a new string theorist. Welcome, Barry Kripke. ;-)

An ad:
Adopt your own Greek for €500!

He will do everything that you don't have time to do:

* sleep until 11 am
* regularly have coffee
* honor siesta after the lunch
* spend evenings by sitting in the bar

You will finally have the time to work from dawn to dusk.

by Luboš Motl (noreply@blogger.com) at February 20, 2015 06:42 PM

ZapperZ - Physics and Physicists

Unfortunate Superfish
I hope this doesn't taint the name "Superfish".

In case you missed it, this week came the news that Lenovo, the Chinese computer company (to whom IBM sold their ThinkPad laptop series) had been installing on some of their computers a rather nasty tracking software called Superfish Visual Discovery.

I would not have paid that much attention to stuff like this were it not for two reasons: (i) I own a rather new Lenovo laptop and (ii) I am familiar with the name "Superfish" but in a different context.

Luckily, after doing a thorough check of my system, I found no sign of this intrusive software. As for the second reason, there is a rather popular piece of software called "Superfish", out of Los Alamos National Lab, that we use quite often. It is a Poisson EM solver, often used to solve for EM fields in complex geometries and boundary conditions. I'm guessing that they gave it that name because "poisson" is French for "fish", and it really is a super piece of software! :)

It is unfortunate that in the context of computer technology, the name Superfish is "poison".

Zz.

by ZapperZ (noreply@blogger.com) at February 20, 2015 06:37 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Five
Today's programme started with a talk by Santanu Mondal on baryons in the sextet gauge model, which is a technicolor-style SU(3) gauge theory with a doublet of technifermions in the sextet (two index symmetric) representation, and a minimal candidate for a technicolor-like model with an IR almost-fixed point. Using staggered fermions, he found that when setting the scale by putting the technipion's decay constant to the value derived from identifying the Higgs vacuum expectation value as the technicondensate, the baryons had masses in excess of 3 TeV, heavy enough to not yet have been discovered by the LHC, but to be within reach of the next run. However, the anomaly cancellation condition when embedding the theory into the Standard Model of the electroweak interactions requires charge assignments such that the lightest technibaryon (which would be a stable particle) would have a fractional electrical charge of 1/2, and while the cosmological relic density can be made small enough to evade detection, the technibaryons produced by the cosmic rays in the Earth's atmosphere should have been able to accumulate (there currently appear to be no specific experimental exclusions for charge-1/2 particles though).

Next was Nilmani Mathur, speaking about mixed-action simulations using overlap valence quarks on the MILC HISQ ensembles (which include the radiative corrections to the lattice gluon action from the quarks). Tuning the charm quark mass via the kinetic rather than rest mass of charmonium, the right charmonium hyperfine splitting is found, as well as generally correct charmonium spectra. Heavy-quark baryons (up to and including the Ωccc) have also been simulated, with results in good agreement with experimental ones where the latter exist. The mixed-action effects appear to be small in mixed-action χPT, and only half as large as those for domain-wall valence fermions on an asqtad sea.

In a brief note, Gunnar Bali encouraged the participants of the workshop to seek out opportunities for Indo-German research collaboration, of which there are still only a limited number of instances.

After the tea break, there were two more theoretical talks, both of them set in the framework of Hamiltonian lattice gauge theory: Indrakshi Raychowdhury presented a loop formulation of SU(2) lattice gauge theory based on the prepotential formalism, where both the gauge links and their conjugate electrical fields are constructed from harmonic oscillator variables living on the sites using the Schwinger construction. By some ingenious rearrangements in terms of "fusion variables", a representation of the perturbative series for Hamiltonian lattice gauge theory purely in terms of integer-valued quantum numbers in a geometric-combinatorial construction was derived.

Lastly, Sreeraj T.P. used an analogy between the Gauss constraint in Hamiltonian lattice gauge theory and the condition of equal "angular impulses" in the SU(2) x SU(2) description of the SO(4) symmetry of the Coulomb potential to derive a description of the Hilbert space of SU(2) lattice gauge theory in terms of hydrogen-atom (n,l,m) variables located on the plaquettes, subject only to the global constraint of vanishing total angular momentum, from which a variational ansatz for the ground state can be constructed.

The workshop closed with some well-deserved applause for the organisers and all of the supporting technical and administrative staff, who have ensured that this workshop ran very smoothly indeed. Another excellent lunch (I understand that our lunches have been a kind of culinary journey through India, starting out in the north on Monday and ending in Kerala today) concluded the very interesting workshop.

I will keep the small subset of my readers whom it may interest updated about my impressions from an excursion planned for tomorrow and my trip back.

by Georg v. Hippel (noreply@blogger.com) at February 20, 2015 12:02 PM

February 19, 2015

Lubos Motl - string vacua and pheno

A good story on proofs of inevitability of string theory
Natalie Wolchover is one of the best popular physics writers in the world, having written insightful stories especially for the Simons Foundation and the Quanta Magazine (her Bc degree in nonlinear optics from Tufts helps). Yesterday, she added
In Fake Universes, Evidence for String Theory
It is a wonderful article about the history of string theory (Veneziano-related history; thunderstorms by which God unsuccessfully tried to kill Green and Schwarz in Aspen, Colorado, which would have postponed the First Superstring Revolution by a century; dualities; AdS/CFT, etc.) with a modern focus on the research attempting to prove the uniqueness of string theory.



At least since the 1980s, we have been saying that "string theory is the only game in town". This slogan was almost universally understood as a statement about sociology or comparative literature. If you look at proposals for a quantum theory of gravity, aside from string theory, you won't find any that work.




However, one may adopt a different, bolder, and non-sociological perspective on the slogan. One may actually view it as a proposition in a theorem (or theorems) that could be proved. "The proof" that would settle all these questions isn't available yet but lots of exciting ideas and papers with partial proofs are already out there.

I've been promoting the efforts to prove that string theory is the only game in town for a decade. On this blog, you may look e.g. at Two Roads from \(\mathcal{N}=8\) SUGRA to String Theory (2008) arguing that the extra dimensions and stringy and brane-like objects are unavoidable additions to supergravity if you want to preserve consistency.




Such partial proofs are usually limited to "subclasses" of vacua of quantum gravity. However, in these subclasses, they show that "a consistent theory of quantum gravity" and "string theory" are exactly the same thing described by two seemingly different phrases.

Wolchover mentions a pretty large number of recent papers that extended the case supporting the claim that string theory is the only consistent theory of quantum gravity. Her list of preprints includes
Maldacena+Žibojedov 2011
Maldacena+Žibojedov+2 2014
Rangamani+Haehl 2014
Maloney+2 2014
Veneziano+3 2015
Some of them were discussed in my blog post String theory cleverly escapes acausality traps two weeks ago.

She communicated with some of the authors of the papers as well as with Tom Hartman whom I remember as a very smart student at Harvard. ;-)

Wolchover also gave some room to "uninterested, neutral" voices such as Matt Strassler and Steve Carlip; as well as to two full-fledged crackpots, Carlo Rovelli and Lee Smolin. The latter two just deny that mathematics works or that it can teach us anything. They have nothing to say – except for some pure fog to emit.
String-related: Supreme found a newly posted 2-part lecture by Witten, 50 minutes plus 90 minutes.
Skeptics' concerns

The uninterested folks ask the obvious skeptical questions: Can these results be generalized to all of classes of quantum gravity vacua? And can these sketches of proofs be made rigorous?

They're of course legitimate questions – and the physicists working on these ideas are surely trying to be more rigorous and more general than ever before – but these questions are tendentious, too. Even if the proofs of string theory's uniqueness only cover some classes of (unrealistic) vacua of quantum gravity, they still contain some information. Why?

First, I think that it seems rather awkward to believe that in several very different classes of vacua of quantum gravity, string theory is the only consistent description, while in other classes not yet studied, the conclusion will be very different. But of course, it's always possible that your results simply do not generalize. But imagine that you prove that a game is the only game in Boston, Beijing, and San Francisco. Will you think that there are completely different games in Moscow? Well, you will tend to think that the answer is No, I guess. What about Las Vegas? Maybe there are different games there – but it would still be strange that not a single one among these games may be found in the 3 cities above. So a proof in "seemingly random" towns or subsets of the vacua is always nontrivial evidence increasing the odds that the statement holds in general – or at least, that the statement holds in the other random "town" which happens to be relevant for our observations.

(By the way, Wolchover's term "fake Universes" for the "other towns" is perhaps making them sound more funny and unreal than they are. If string theory is right, these other vacua are as genuine as other towns where you just don't happen to live.)

Second, while the proofs that "the consistent theory must be string/M-theory" are not rigorous at this moment (they cannot be because we can't even define string/M-theory in way that a mathematician could call rigorous, and the same comment mostly applies to the phrase "quantum gravity", too), one "subset" of the results is much more rocksolid, and it's the elimination of specific enough candidate alternatives.

What do I mean?

The papers are usually not formulated in this way but I do think that the people who write them do have a very robust understanding why random cheap ideas that not a terribly bright kid could invent in a few minutes, like loop quantum gravity, cannot be a consistent description of any vacuum of quantum gravity that is studied in these papers.

So even if the "positive proof" isn't rigorous at all, the "negative proofs" may be rigorous because these researchers have appreciated many properties that a consistent theory of quantum gravity simply has to have, and it's easy to see that particular classes of alternative theories – pretty much all of them in the literature – simply don't have these properties.

Third, some of the specific reasons that lead the skeptics to their skepticism may very well be demonstrably wrong. For example, Matt Strassler discusses some of the papers above that conclude that a consistent theory of quantum gravity has to have a stringy density of states. And he comments on this result dismissively:
And just finding a stringy density of states — I don’t know if there’s a proof in that. This is just one property.
It's one property but it may be a sufficient property to locate the theory, too. If you focus on perturbative string theory, it seems that all of its vacua may be described in terms of a world sheet CFT. A consistent perturbative string theory (which is classified as a "vacuum or solution of string theory" nonperturbatively, but let's think that they're different theories) is given by a two-dimensional conformal field theory that obeys modular invariance and perhaps a finite number of extra conditions, and that's it.

If you get the stringy density of states, you may see that the density grows more quickly with energy than the density in point-like particle field theories. So you may determine that these "particles" have to have infinitely many internal degrees of freedom. If you assume that these are organized as fields in a world volume, the parameters of the string-like density are enough to determine that there is 1 internal spatial dimension inside the particles – they have to be strings.
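
Schematically (my paraphrase of the standard lore, not a quote from Wolchover's article), the two growth rates being compared are

\[ \rho_{\rm string}(E) \;\sim\; E^{-a}\, e^{\beta_H E}, \qquad \rho_{\rm QFT}(E) \;\sim\; e^{c\, E^{(d-1)/d}}, \]

where \(\beta_H\) is the inverse Hagedorn temperature, set by the string tension up to an order-one factor that depends on the particular string theory, while a local quantum field theory in \(d\) spacetime dimensions only achieves the slower, sub-exponential growth on the right. Reading off \(\beta_H\) and the power \(a\) from the density of states is what pins down the one-dimensional internal structure of the "particles".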

Once you know that the particles are strings, you may be able to determine the other defining properties of string theory (such as modular invariance) by a careful analysis of other consistency conditions. Again, I can't immediately show you all of these proofs in a rigorous way but I am pretty much confident that the statements are true, at least morally true. Again, I am able to much more easily prove that sufficiently "particular" or "[constructively] well-defined" alternative theories or deformations of string theory that no longer obey the strict rules of string theory will be inconsistent. They will either violate the conditions determined in the recent papers, or some older conditions known to be essential for the consistency since the 1970s, 1980s, or 1990s.

In the previous paragraph, an argument of mine only talked about the elimination of sufficiently "particular" or "[constructively] well-defined" theories. What about some hypothetical vague theories that are not clearly well-defined, at least not by a constructive set of definitions? What about some "exceptional" solutions to the string-theoretical or quantum-gravitational "bootstrap" conditions?

Indeed, for those hypothetical theories, I am much less certain that they're impossible. It would be very interesting – even for most string theorists – to know some of these "seemingly non-stringy" theories that manage to obey the consistency conditions as well. However, for these "not really well-defined" theories, it is also much less easy to argue that they are not string theory. They could be unusual solutions of string theory. And their belonging to string theory could depend on your exact definition of string theory.

For example, pure \(AdS_3\) gravity at the minimum radius was shown (by Witten) to include a hidden monster group symmetry. Is that theory a part of string theory? I think it is, even though I don't know what the right way is to get it as a compactification of a critical, \(d=10\) or \(d=26\) string theory (or whether it's right to demand this kind of construction at all). But I actually do think that such a compactification (perhaps bosonic M-theory on the Leech 24-torus) is a possible way to define it. Even if it is not, there may be reasons to call the theory "a part of string/M-theory". The AdS/CFT correspondence works for this AdS vacuum much like it does for many "typical" stringy AdS spaces.

But you may see that there's some uncertainty here. On the other hand, I think that there is no ambiguity about the claim that the \(AdS_3\) gravity with the monster group is not a solution to loop quantum gravity. ;-) (Even though I have actually spent many many hours by trying to connect these sets of ideas as tightly as possible, but that's a story for another blog post.)



Again, I would like to stress that this whole line of research is powerful primarily because it may "immediately" eliminate some (huge) classes of possible alternative theories that are actually hopeless for reasons that used to be overlooked. If you can eliminate a theory by showing it's inconsistent, you simply don't need any real experiments! This is a trivial point that all those anti-string crackpots seem to completely misunderstand or just dishonestly deny.

It's like considering competing theories that also have their ideas about the value of \(d=3\times 3^2-1\). Some theories say that the result is \(d=1917\), others prefer \(d=-1/\pi\). And the string haters scream that without experiments, all these theories with different values of \(d\) are equally unlikely. I apologize but they are not. Even without experiments, it is legitimate to only consider theories which say \(d=26\). Sorry for having used bosonic string theory as my example. ;-) All the theories with other values of \(d\) may simply be killed before you even start!
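
Just to spell out the arithmetic behind the joke:

\[
d = 3\times 3^2 - 1 = 27 - 1 = 26,
\]

which is, of course, the critical dimension of the bosonic string.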

In some sense, I am doing an experiment when I am eliminating all the wrong theories. What's special about this experiment is that the number of gadgets I have to buy and arrange, and the number of physical moves of my arms that I have to make, is zero. It's still an experiment – the simplest one, which requires nothing aside from mathematics to be done. But it is totally enough to eliminate all known alternatives to string theory (at least in the classes of vacua described by the papers), and the people who don't understand why this reasoning is perfectly kosher and perfectly scientific are just hopeless imbeciles.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at February 19, 2015 07:27 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Four
Today was dedicated to topics and issues related to finite temperature and density. The first speaker of the morning was Prasad Hegde, who talked about the QCD phase diagram. While the general shape of the Columbia plot seems to be fairly well-established, there is now a lot of controversy over the details. For example, the two-flavour chiral limit seems to be well-described by either the O(4) or O(2) universality class, though it isn't currently possible to exclude that it might be Z(2), and while the three-flavour transition appears to be known to be Z(2), simulations with staggered and Wilson quarks give disagreeing results for its features. Another topic that gets a lot of attention is the question of U(1)A restoration; of course, U(1)A is broken by the axial anomaly, which arises from the path integral measure and is present at all temperatures, so it cannot be expected to be restored in the same sense that chiral symmetry is, but it might be that as the temperature gets larger, the influence of the anomaly on the Dirac eigenvalue spectrum gets outvoted by the temporal boundary conditions, so that the symmetry violation might disappear from the correlation functions of interest. However, numerical studies using domain-wall fermions suggest that this is not the case. Finally, the equation of state can be obtained from simulations with stout or HISQ smearing with very similar results; it appears well-described by a hadron resonance gas at low T and matches reasonably well to perturbation theory at high T.

The next speaker was Saumen Datta speaking on studies of the QCD plasma using lattice correlators. While the short time extent of finite-temperature lattices makes it hard to say much about the spectrum without the use of techniques such as the Maximum Entropy Method, correlators in the spatial directions can be readily used to obtain screening masses. Studies of the spectral function of bottomonium in the Fermilab formalism suggest that the Y(1S) survives up to at least twice the critical temperature.

Sourendu Gupta spoke next about the equation of state in dense QCD. Using the Taylor-expansion method (the Taylor expansion was apparently first invented in the 14th-15th century by the Indian mathematician Madhava) together with Padé approximants to reconstruct the function from the truncated series, one finds that the statistical errors on the reconstruction blow up as one nears the suspected critical point. This can be understood as a specific instance of the "no-free-lunch theorem": a direct simulation (were it possible) would suffer from critical slowing down as the critical point is approached, which would likewise lead to large statistical errors from a fixed number of configurations.
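
As a toy illustration of the series-plus-Padé idea (my own sketch, not code from the talk), here is how a Padé approximant built from a few Taylor coefficients can recover a function right up to a pole where the truncated series itself fails badly; the pole plays the role of the suspected critical point:

```python
# Toy example: reconstruct f(x) = 1/(1 - x), which diverges at x = 1,
# from the first six Taylor coefficients around x = 0.
import numpy as np
from scipy.interpolate import pade

coeffs = np.ones(6)          # Taylor coefficients of 1/(1-x): 1 + x + x^2 + ... + x^5
p, q = pade(coeffs, 1)       # Pade approximant p(x)/q(x) with a degree-1 denominator

x = 0.9                      # close to the "critical point" at x = 1
partial_sum = np.polyval(coeffs[::-1], x)   # truncated Taylor series at x
pade_value = p(x) / q(x)                    # rational reconstruction at x
exact = 1.0 / (1.0 - x)

print(partial_sum, pade_value, exact)
# The partial sum (about 4.69) undershoots badly, while the Pade value matches
# the exact result (10.0), because the rational form can develop a pole.
```

In the realistic case the Taylor coefficients carry statistical errors, and it is their propagation through the rational reconstruction that blows up near the critical point.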

The last talk before lunch was an investigation of an alternative formulation of pure gauge theory using auxiliary bosonic fields, in an attempt to render the QCD action amenable to a dual description that might make it possible to avoid the sign problem at finite baryon chemical potential. The alternative formulation appears to describe exactly the same physics as the standard Wilson gauge action, at least for SU(2) in 3D, and in 2D and/or in certain limits its continuum limit is in fact known to be Yang-Mills theory. However, when fermions are introduced, the dual formulation still suffers from a sign problem, but it is hoped that any trick that might avoid this sign problem would then also avoid the finite-μ one.

After lunch, there were two non-lattice talks. The first one was given by Gautam Mandal, who spoke about thermalisation in integrable models and conformal field theories. In CFTs, it can be shown that for certain initial states, the expectation value of an operator equilibrates to a certain "thermal" expectation value, and a generalisation to integrable models, where the "thermal" density operator includes chemical potentials for all (infinitely many) conserved charges, can also be given.
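
For concreteness (my own gloss, not the speaker's notation): the role of the thermal density operator in an integrable model is played by a generalised Gibbs ensemble,

\[
\rho_{\mathrm{GGE}} = \frac{1}{Z}\,\exp\Big(-\sum_n \lambda_n Q_n\Big), \qquad Z = \operatorname{Tr}\,\exp\Big(-\sum_n \lambda_n Q_n\Big),
\]

where the sum runs over the (infinitely many) conserved charges \(Q_n\) and the Lagrange multipliers \(\lambda_n\) are fixed by the initial expectation values \(\langle Q_n \rangle\); the ordinary Gibbs ensemble is recovered when only the Hamiltonian is retained.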

The last talk of the day was a very lively presentation of the fluid-gravity correspondence by Shiraz Minwalla, who described how gravity in anti-de Sitter space goes over to Navier-Stokes hydrodynamics in a suitable long-wavelength limit.

In the evening, the conference banquet took place on the roof terrace of a very nice restaurant serving very good European-inspired cuisine and Indian red wine (also rather nice -- apparently the art of winemaking has recently been adapted to the Indian climate, e.g. the growing season is during the cool season, and this seems to work quite well).

by Georg v. Hippel (noreply@blogger.com) at February 19, 2015 06:32 PM

John Baez - Azimuth

Scholz’s Star

100,000 years ago, some of my ancestors came out of Africa and arrived in the Middle East. 50,000 years ago, some of them reached Asia. But between those dates, about 70,000 years ago, two stars passed through the outer reaches of the Solar System, where icy comets float in dark space!

One was a tiny red dwarf called Scholz’s star. It’s only 90 times as heavy as Jupiter. Right now it’s 20 light years from us, so faint that it was discovered only in 2013, by Ralf-Dieter Scholz—an expert on nearby stars, high-velocity stars, and dwarf stars.

The other was a brown dwarf: a star so small that it doesn’t produce energy by fusion. This one is only 65 times the mass of Jupiter, and it orbits its companion at a distance of 80 AU.

(An AU, or astronomical unit, is the distance between the Earth and the Sun.)

A team of scientists has just computed that while some of my ancestors were making their way to Asia, these stars passed about 0.8 light years from our Sun. That’s not very close. But it’s close enough to penetrate the large cloud of comets surrounding the Sun: the Oort cloud.

They say this event didn’t affect the comets very much. But if it shook some comets loose from the Oort cloud, they would take about 2 million years to get here! So, they won’t arrive for a long time.

At its closest approach, Scholz’s star would have had an apparent magnitude of about 11.4. This is a bit too faint to see, even with binoculars. So, don’t look for it in myths and legends!
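
As a back-of-the-envelope check (my arithmetic, not a number from the paper): moving the star from its present distance of 20 light years to 0.8 light years brightens it by

\[
\Delta m = 5 \log_{10}\!\left(\frac{20}{0.8}\right) \approx 7.0 \text{ magnitudes},
\]

so a magnitude of about 11.4 at closest approach corresponds to a star of roughly magnitude 18.4 today, far too faint to notice without a fairly large telescope.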

As usual, the paper that made this discovery is expensive in journals but free on the arXiv:

• Eric E. Mamajek, Scott A. Barenfeld, Valentin D. Ivanov, Alexei Y. Kniazev, Petri Vaisanen, Yuri Beletsky, Henri M. J. Boffin, The closest known flyby of a star to the Solar System.

It must be tough being a scientist named ‘Boffin’, especially in England! Here’s a nice account of how the discovery was made:

• University of Rochester, A close call of 0.8 light years, 16 February 2015.

The brown dwarf companion to Scholz’s star is a ‘class T’ star. What does that mean? It’s pretty interesting. Let’s look at an example just 7 light years from Earth!

Brown dwarfs

 

Thanks to some great new telescopes, astronomers have been learning about weather on brown dwarfs! It may look like this artist’s picture. (It may not.)

Luhman 16 is a pair of brown dwarfs orbiting each other just 7 light years from us. The smaller one, Luhman 16B, is half covered by huge clouds. These clouds are hot—1200 °C—so they’re probably made of sand, iron or salts. Some of them have been seen to disappear! Why? Maybe ‘rain’ is carrying this stuff further down into the star, where it melts.

So, we’re learning more about something cool: the ‘L/T transition’.

Brown dwarfs can’t fuse ordinary hydrogen, but a lot of them fuse the isotope of hydrogen called deuterium that people use in H-bombs—at least until this runs out. The atmosphere of a hot brown dwarf is similar to that of a sunspot: it contains molecular hydrogen, carbon monoxide and water vapor. This is called a class M brown dwarf.

But as they run out of fuel, they cool down. The cooler class L brown dwarfs have clouds! But the even cooler class T brown dwarfs do not. Why not?

This is the mystery we may be starting to understand: the clouds may rain down, with material moving deeper into the star! Luhman 16B is right near the L/T transition, and we seem to be watching how the clouds can disappear as a brown dwarf cools. (Its larger companion, Luhman 16A, is firmly in class L.)

Finally, as brown dwarfs cool below 300 °C, astronomers expect that ice clouds start to form: first water ice, and eventually ammonia ice. These are the class Y brown dwarfs. Wouldn’t that be neat to see? A star with icy clouds!

Could there be life on some of these stars?

Caroline Morley regularly blogs about astronomy. If you want to know more about weather on Luhman 16B, try this:

• Caroline Morley, Swirling, patchy clouds on a teenage brown dwarf, 28 February 2012.

She doesn’t like how people call brown dwarfs “failed stars”. I agree! It’s like calling a horse a “failed giraffe”.

For more, try:

• Brown dwarfs, Scholarpedia.


by John Baez at February 19, 2015 05:26 PM