Particle Physics Planet


September 16, 2019

Peter Coles - In the Dark

Eleven Years a Blogging

I received the little graphic above from WordPress yesterday to remind me that today is the 11th Anniversary of my first blog post, on September 16th 2008. If this were a wedding it would be a steel anniversary…

To be precise, the graphic reminded me that I registered with WordPress on 15th September 2008. I actually wrote my first post on the day I registered but unfortunately I didn't really know what I was doing on my first day at blogging and didn't manage to figure out how to publish that earth-shattering piece. It was only after I'd written my second post that I realized that the first one wasn't actually visible to the general public because I hadn't pressed the right buttons, so the two appear in the wrong order in my archive. Anyway, that confusion is the reason why I usually take 16th September as this blog's real anniversary.

I’d like to take this opportunity to send my best wishes, and to thank, everyone who reads this blog, however occasionally. According to the WordPress stats, I’ve got readers from all round the world, including the Vatican!

by telescoper at September 16, 2019 03:39 PM

Peter Coles - In the Dark

Lord, Let Me In The Lifeboat

Yesterday I noticed a now-typical outburst of British mean-spirited xenophobia: people are cancelling their donations to the Royal National Lifeboat Institution on the grounds that it spends a massive 2% of its budget saving lives abroad rather than in the UK. Or at least they claim to be cancelling donations. Judging by the kind of people commenting on Twitter I'd bet that none of them has ever donated anything to anyone in their entire crab-faced existence. The face of 'Global Britain' as represented by the Daily Mail gets more unpleasant by the day.

Anyway, as a regular donor to the RNLI I have this morning increased my contribution and will be wearing my RNLI pin badge in support of the brave men and women who regularly risk their lives to save those in distress at sea.

Incidentally, in case you were wondering, the RNLI also serves Ireland: there are 59 lifeboats based in 45 stations in the Republic as well as Northern Ireland.

Anyway, I don’t want to let all this get anyone down so I’m sharing this piece of music which sprang to mind. Lord Let Me In The Lifeboat was recorded in 1945 for the Blue Note label by a band led by Sidney Bechet and Bunk Johnson. The latter had just come out of retirement courtesy of Sidney Bechet’s brother Leonard, a dentist, who furnished trumpeter Johnson with a new set of false teeth to allow him to resume playing. Not a lot of people know that.

Understandably, Bunk Johnson’s chops were not in great shape on this session but Bechet’s certainly were! When I was a lad I used to spend a bit of time transcribing clarinet solos from old records, and I remember doing this one by Sidney Bechet. The notes in themselves are not hard to play, but few people could generate that heavy vibrato and rich tone!

by telescoper at September 16, 2019 09:33 AM

September 15, 2019

Christian P. Robert - xi'an's og

Le Monde puzzle [#1110]

A low-key sorting problem as Le Monde current mathematical puzzle:

If the numbers from 1 to 67 are randomly permuted and if the sorting algorithm consists in picking a number i that sits at a position higher than its rank i and moving it to the correct i-th position, what is the maximal number of steps needed to sort this set of integers when the selected integer is chosen optimally?

As the question does not clearly state what happens to the number j that stood in the i-th position, I made the assumption that the entire sequence from position i to position n is moved one position upwards (rather than having i and j exchanged). In that case my intuition was that moving the smallest movable number was optimal, resulting in the following R code

sorite<-function(permu){
  n=length(permu)
  p=0
  while (max(abs(permu-(1:n)))>0){
    # smallest value sitting above its rank
    j=min(permu[permu<(1:n)])
    p=p+1
    # insert j at its rank, shifting the rest of the sequence upwards
    permu=unique(c(permu[(1:n)<j],j,permu[j:n]))}
  return(p)}

which takes at most n-1 steps to reorder the sequence. I checked, however, that this greedy insertion strategy was indeed optimal through a recursive function

resorite<-function(permu){
  n=length(permu)
  p=0
  while (max(abs(permu-(1:n)))>0){
    # candidate values sitting above their rank
    j=cand=permu[permu<(1:n)]
    if (length(cand)==1){
      p=p+1
      permu=unique(c(permu[(1:n)<j],j,permu[j:n]))
    }else{
      sol=n^3
      for (i in cand){
        # try each candidate move and recurse on the result
        qermu=unique(c(permu[(1:n)<i],i,permu[i:n]))
        rol=resorite(qermu)
        if (rol<sol) sol=rol}
      p=p+1+sol
      break()}}
  return(p)}

which did confirm my intuition.
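
As a quick sanity check (my own addition, not from the post), one can verify on small instances that the greedy sorite and the exhaustive resorite return the same count, before running the greedy version on the actual n=67 case:

# the two counts should agree if the greedy choice is indeed optimal
for (t in 1:10){
  permu=sample(1:7)
  if (sorite(permu)!=resorite(permu)) print(permu)}
sorite(sample(1:67))  # one random n=67 instance, at most 66 moves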

by xi'an at September 15, 2019 10:19 PM

Peter Coles - In the Dark

Reasons to stay alive

I saw this message from author Matt Haig on Twitter last weekend and it affected me so much I couldn’t write about it at the time.

Twenty years ago, when he was in his twenties, Matt tried to take his own life. He didn’t succeed, but the attempt left him severely ill as he summarises in that tweet. He wrote about his crisis in his book Reasons To Stay Alive, from which I have borrowed the title of this post.

Why did this message affect me so much? It’s largely because the words he uses to describe his condition also exactly describe what I was like seven years ago when I was admitted to an acute ward in a psychiatric hospital. I wasn’t exactly suicidal, just so exhausted that I didn’t really care what happened next. I was however put on a kind of ‘suicide watch’, the reason for this being that, apparently, even while sedated, I kept trying to pull the tube out of my arm. I was being fed via a drip because I was ‘Nil by Mouth’ by virtue of uncontrollable vomiting. I guess the doctors thought I was trying to sabotage myself, but I wasn’t. Not consciously anyway. I think it was probably just irritating me. In fact I don’t remember doing it at all, but that period is very much a blur altogether. Anyway, I then found myself in physical restraints so I couldn’t move my arms to stop me doing that.

Eventually I was deemed well enough to move to a general ward and shortly after that I was discharged (with follow-up counselling and medication).

Experiences like that – which I sincerely hope none of you reading this ever have to go through – make you feel very isolated because you are lost inside your own head and body. Knowing that other people go through similar things, and not only survive but prosper, helps a lot. You feel a bit less of an outlier. Of course I’ll never appear on stage at the National Theatre, but although the intervening years haven’t exactly been plain sailing, the last seven have brought far more positives than negatives.

It’s hard to explain why Matt’s message had such a resonance. His experience was clearly far worse than mine, but when I was discharged from hospital the doctors made it very clear just how ill I had been, and that if there was any recurrence I should get help as soon as possible. As well as writing about it on this blog, I did a piece for Time to Change Wales, encouraging people to ask for help if they need it.

Anyway, this brings me to the point of this sermon. Yesterday I received this by email:

It’s from Niteline, an organisation whose volunteers offer students free confidential counselling, and it came with a suggestion (which I will follow) that I should share it with students before and after my lectures. I’m not sure how many students will read this blog, but I thought I would share it here too. If it encourages just one person who is struggling to find someone to talk to then it’s worth it.

by telescoper at September 15, 2019 01:38 PM

Christian P. Robert - xi'an's og

two positions at UBC

A long-time friend at UBC pointed out to me the opening of two tenure-track Assistant Professor positions at the Department of Statistics at the University of British Columbia, Vancouver, with an anticipated start date of July 1, 2020 or January 1, 2021. The deadline for applications is October 18, 2019. Statistics at UBC is an internationally renowned department, in particular (but not only) for computational statistics and Bayesian methods, and this is a great opportunity to join it. (Not mentioning the unique location of the campus and the beautiful surroundings of the city of Vancouver!)

by xi'an at September 15, 2019 12:18 PM

Lubos Motl - string vacua and pheno

Dynamical OPE coefficients as a TOE
Towards the universal equations for quantum gravity in all forms

In the 1960s, before string theory was really born, people studied the bootstrap and the S-matrix theory. The basic idea – going back to Werner Heisenberg (but driven by younger folks such as Geoffrey Chew, who died in April 2019) – was that consistency alone was enough to determine the S-matrix. In such a consistency-determined quantum theory, there would be no clear difference between elementary and composite fields and everything would just fit together.

Veneziano wrote his amplitude in 1968 and a few years later, it became clear that strings explained that amplitude – and the amplitude could have been created in a "constructive" way, just like QCD, completed at roughly the same time, which was "constructively" made of quark and gluon fields (although throughout much of the 1960s, most of the smartest people had believed the strong force not to have any underlying "elementary particles" at all). A new wave of constructive theories – colorful and stringy generalizations of the gauge theories – prevailed and downgraded the bootstrap to a quasi-philosophical semi-dead fantasy.

On top of that, the constructive theory – string theory – has led to progress that made it clear that it has a large number of vacua, so the complete uniqueness of the dynamics was incorrect wishful thinking.



Still, all these vacua of string/M-theory are connected and the theory that unifies them is unique. Since the 1990s, we have understood its perturbative aspects much more than before, uncovered limited nonperturbative definitions for some superselection sectors, but it's true that the perturbative limit of string theory is the most thoroughly understood portion of quantum gravity that we have.



Various string vacua are being constructed in very different ways. We start with type II-style \(N=1\) world sheet dynamics, or with the heterotic \(N=(1,0)\) dynamics, add various GSO projections and corresponding twisted and antiperiodic sectors. Extra orientifold and orbifold decorations may be added, along with D-branes and fluxes. And the hidden dimensions may be treated as some Ricci-flat manifolds – with fluxes etc. – but also as more abstract CFTs (conformal field theories) resembling minimal models such as the Ising model.

The diversity looks too large. It seems that while the belief in the single underlying theory is totally justified by the network of dualities and topological phase transitions etc., the unity isn't explicitly visible. Isn't there some way to show that all these vacua of string/M-theory are solutions to the same conditions?

It's a difficult task (and there is no theorem guaranteeing that the solution exists at all) because the individual constructions are so different – even qualitatively different. The different vacua – or non-perturbative generalizations of world sheet CFTs – have to be solutions to some conditions or "equations". But how can so heavily different "stories" be "solutions" to the same "equations"? Only relatively recently, I decided that to make progress in this plan, one has to "shut up and calculate".

And to calculate, to have a chance of equations and their solutions, one needs to convert the "stories" to "quantitative objects". We need to "quantify the rules defining theories and orbifolds etc.". To do so, we need to write a more general Ansatz that interpolates between all the different theories and vacua that we have in string/M-theory i.e. quantum gravity.

What is the Ansatz that is enough for a full world sheet CFT? Well, it's necessary and almost sufficient to define the spectrum of all local operators and their OPEs (operator product expansions). The latter contain most of the information and are encoded in OPE coefficients \(C_{12}^3\) – closely related to three-point structure constants \(C_{123}\) and \(B_3\), see e.g. these pages for some reminder about the crossing symmetry and the conformal bootstrap.
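
To fix conventions: in a two-dimensional CFT the OPE has the standard schematic form (textbook material, quoted here only to make the role of the coefficients explicit, with \(h_i\) denoting the conformal weights) \[ \phi_1(z)\,\phi_2(0) \sim \sum_3 C_{12}^3\, z^{h_3-h_1-h_2}\,\phi_3(0), \] so the spectrum \(\{h_i\}\) together with all the coefficients \(C_{12}^3\) essentially pins down the theory.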

What you should appreciate is that coefficients such as \(C_{12}^3\) encode most of the information about the world sheet CFT – e.g. a perturbative string vacuum – but they are treated as completely static numbers that define the theory at the very beginning. You can't touch them afterwards; they can't change. It became clear to me that it's exactly this static condition that is incompatible with the desire to unify the string vacua as solutions to the same equations.

To have a chance for a unifying formulation of a theory of everything (TOE), we apparently need to treat all these coefficients such as \(C_{12}^3\) as dynamical ones. "Dynamical" means "the dependence on time". Which time? I think that in the case of \(C_{12}^3\), we need the dependence on the spacetime's time \(x^0\) (and its spatial partners \(x^i\), if you wish), not the world sheet time \(\tau\), because the values of all these coefficients \(C_{12}^3\) carry the "equivalent information" as any more direct specification of the point on the configuration space of the effective spacetime QFT (a point in the configuration space of mainly scalar fields in the spacetime, if you wish).

Most of the degrees of freedom in \(C_{12}^3\) are non-dynamical ones. There is a lot of freedom hidden in the ability to linearly mix the operators into each other; and on top of that, all these coefficients are apparently obliged to obey the crossing symmetry constraints. But there should still exist a quantization of states on the configuration space of these \(C_{12}^3\) coefficients and the resulting space should be equivalent to a configuration space of fields in the target spacetime.

Since the 1995 papers by Kutasov and Martinec, I've been amazed by the observation that "whatever super-general conditions of quantum gravity hold in the spacetime, they must also apply in the world sheet, and vice versa". So I think that there are analogous operators to the local operators in the world sheet CFT but in the spacetime – labeling the "creation operators for all black hole microstates" – and their counterpart of \(C_{12}^3\) tells us about all the "changes of one microstate to another" that you get when a black hole devours another particle (or another black hole microstate). These probably obey some bootstrap equations as well although as far as I know, the research of those is non-existent in the literature and may only be kickstarted after the potential researchers learn about this possibility from me.

I tend to think that the remaining degrees of freedom in \(C_{12}^3\) aren't fully eliminated. They're just very heavy on the world sheet – and responsible for quantum gravity, non-local phenomena on the world sheet that allow the world sheet topology to change.

In the optimal scenario, one may write down the general list of the fields like \(C_{12}^3\) – possibly, three-point functions may fail to be enough and \(n\)-point functions for all positive \(n\) may be needed as well – and there will be some universal conditions. These should have the solutions in terms of the usual consistent, conformal, modular invariant world sheet theories with the state-operator correspondence. The rules could have a direct generalization outside the weakly coupled limit but the solutions should simplify in the weakly coupled limit. Dualities should be manifest – the theories or vacua dual to each other would explicitly correspond to the same values of the degrees of freedom such as \(C_{12}^3\) and the particular "dual descriptions" would be just different strategies to construct these solutions, usually starting with some approximations.

More generally, the term "quantum gravity" sounds a bit more general than "string/M-theory" although they're ultimately equivalent. "Quantum gravity" doesn't have any obvious strings in it to start with – and has just all the black hole microstates and their behavior. It seems clear to me that people need to understand the equivalence between "quantum gravity" and "string theory" better and to do so, they have to rewrite the rules of string theory in the language of a much larger number of degrees of freedom. The word "consistency" before "of quantum gravity" sounds too verbal and qualitative so far – we should have a much clearer translation of this abstract noun to the language of equations.

It seems to me that the number of people in the world who are really intensely thinking about foundational questions of any similar kind or importance is of order one, and the ongoing anti-science campaign has the goal of reducing this estimate to a number much smaller than one.

by Luboš Motl (noreply@blogger.com) at September 15, 2019 09:40 AM

September 14, 2019

Christian P. Robert - xi'an's og

the three i’s of poverty

Today I made a “quick” (10h door to door!) round trip visit to Marseille (by train) to take part in the PhD thesis defense (committee) of Edwin Fourrier-Nicolaï, whose thesis is entitled Poverty, inequality and redistribution: an econometric approach. While this was mainly a thesis in economics, meaning defending some theory on inequalities based on East German data, there were Bayesian components in the thesis that justified (to some extent!) my presence in the jury, especially around mixture estimation by Gibbs sampling. (On which I started working almost exactly 30 years ago, when I joined Paris 6 and met Gilles Celeux and Jean Diebolt.) One intriguing [for me] question stemmed from this defense, namely the notion of a Bayesian estimate of the three i’s of poverty (TIP) curve. The three i’s stand for incidence, intensity, and inequality; introduced in Jenkins and Lambert (1997), this curve measures the average income loss from the poverty level for the 100p% lower incomes, when p varies between 0 and 1. It thus depends on the distribution F of the incomes and, when using a mixture distribution, its computation requires a numerical cdf inversion to determine the p-th income quantile. A related question is thus how to define a Bayesian estimate of the TIP curve. Using an average over the values of an MCMC sample does not sound absolutely satisfactory since the upper bound in the integral varies for each realisation of the parameter. The use of another estimate would however require a specific loss function, an issue not discussed in the thesis.
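
As a rough illustration of the object itself (my own sketch of a plain empirical TIP curve, not the Bayesian estimate discussed at the defense; the log-normal incomes and the 60%-of-median poverty line are arbitrary choices):

# empirical TIP curve: average of the cumulated poverty gaps of the 100p% poorest
tip=function(income,z,p=seq(0,1,by=.01)){
  n=length(income)
  gap=pmax(z-sort(income),0)  # poverty gaps, ordered from poorest to richest
  sapply(p,function(q) sum(gap[seq_len(floor(q*n))])/n)}
x=rlnorm(1e4)  # simulated incomes, for illustration only
plot(seq(0,1,by=.01),tip(x,z=.6*median(x)),type="l",xlab="p",ylab="TIP(p)")

The three i’s can then be read off the curve: the incidence is where it flattens (the headcount ratio), the intensity is its maximal height (the average poverty gap), and the inequality among the poor shows in its concavity.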

by xi'an at September 14, 2019 10:19 PM

Peter Coles - In the Dark

The Radio, The Universities and The Culture in Ireland

Yesterday at the end of a busy week I finally got round to booking a ticket for next Friday’s Culture Night performance at the National Concert Hall in Dublin. That will be my first concert of the new season and I’m looking forward to it. Among other things, it gives me the chance to persevere with Brahms…

However, later on in the evening yesterday I heard via one of the presenters of RTÉ lyric fm, the radio station on which next week’s concert will be broadcast, that there are plans afoot to close down the channel because of funding difficulties.

Since I first arrived in Ireland nearly two years ago, I searched through the available radio stations for one that I could listen to and it didn’t take me long to settle on RTÉ lyric fm, which has been a regular source of edification, relaxation and stress relief for me. I fear its loss tremendously. I’m not the only one. If you’re on Twitter, take a look at the hashtag #savelyricfm.

I listen to the jovial Marty Whelan in the morning before work, and when I get home in the evenings I enjoy John Kelly’s Mystery Train followed by Bernard Clarke’s The Blue of the Night on weekdays and Ellen Cranitch’s The Purple Vespertine at the weekends. All these programmes have intriguingly eclectic playlists, from classical to jazz and beyond, and presenters who clearly know and love the music. It’s not just a music channel, of course. RTÉ lyric fm covers culture and the arts generally, and is the only channel run by RTÉ – which is meant to be a public service broadcaster – that has this area as its province. It would be a crying shame as well as an abdication of its cultural responsibility if all this were lost to save the paltry amount of money required to keep the station going. I’ll do anything I can to save RTÉ lyric fm. For me it really is one of the very best things about Ireland.

I might add that I stopped watching television many years ago and don’t have a TV set. I’m even less inclined to get one now that I’m in Ireland as the schedules are dominated by the kind of crappy programmes I didn’t watch when I lived in Britain and have even less reason to watch now. The radio, on the other hand, is something I enjoy a lot.

Another item that was doing the rounds last week was the publication of the annual Times Higher Education World Rankings. I’ll save a proper rant about the stupidity and dishonesty of these league tables for another occasion, but one thing that has preoccupied the media here about the results is that Irish universities have done rather badly: Trinity College, for example, has fallen 44 places since last year. While I don’t trust these tables much, together with Ireland’s very poor showing in the recent ERC grant round they do paint a consistent picture of a higher education system that is struggling with the consequences of years of chronic underfunding.

(I’ll add that Maynooth University bucked the trend a bit, rising from the band covering 351st-400th place to that covering 301st to 350th place. That means that Maynooth went up by anything from 1 place to 99 places. There can be little doubt who is responsible for this…)

For me these stories are both consequences of the prevailing political culture in Ireland, which is a form of neoliberalism that deems neither culture nor public service nor education to be things of value in themselves and therefore just leaves them to dwindle through lack of care. The only thing that matters is the cycle of production and consumption. Culture is irrelevant.

While the Irish economy is booming – at least in terms of GDP growth – the proceeds of this growth go to a relatively small number of rich individuals and multinational corporations while most people don’t see any benefit at all, either in their own salaries, or in investment in the public services. Austerity for everyone except the rich has been the policy for the last decade in Ireland as it has in the United Kingdom and the consequences are plain for all to see, in the arts, in the universities and in the lives of ordinary people.

by telescoper at September 14, 2019 01:12 PM

John Baez - Azimuth

Klein on the Green New Deal

I’m going to try to post more short news items. For example, here’s a new book I haven’t read yet:

• Naomi Klein, On Fire: The (Burning) Case for a Green New Deal, Simon and Schuster, 2019.

I think she’s right when she says this:

I feel confident in saying that a climate-disrupted future is a bleak and an austere future, one capable of turning all our material possessions into rubble or ash with terrifying speed. We can pretend that extending the status quo into the future, unchanged, is one of the options available to us. But that is a fantasy. Change is coming one way or another. Our choice is whether we try to shape that change to the maximum benefit of all or wait passively as the forces of climate disaster, scarcity, and fear of the “other” fundamentally reshape us.

Nonetheless Robert Jensen argues that the book is too “inspiring”, in the sense of unrealistic optimism:

• Robert Jensen, The danger of inspiration: a review of On Fire: The (Burning) Case for a Green New Deal, Resilience, 10 September 2019.

Let me quote him:

On Fire focuses primarily on the climate crisis and the Green New Deal’s vision, which is widely assailed as too radical by the two different kinds of climate-change deniers in the United States today—one that denies the conclusions of climate science and another that denies the implications of that science. The first, based in the Republican Party, is committed to a full-throated defense of our pathological economic system. The second, articulated by the few remaining moderate Republicans and most mainstream Democrats, imagines that market-based tinkering to mitigate the pathology is adequate.

Thankfully, other approaches exist. The most prominent in the United States is the Green New Deal’s call for legislation that recognizes the severity of the ecological crises while advocating for economic equality and social justice. Supporters come from varied backgrounds, but all are happy to critique and modify, or even scrap, capitalism. Avoiding dogmatic slogans or revolutionary rhetoric, Klein writes realistically about moving toward a socialist (or, perhaps, socialist-like) future, using available tools involving “public infrastructure, economic planning, corporate regulation, international trade, consumption, and taxation” to steer out of the existing debacle.

One of the strengths of Klein’s blunt talk about the social and ecological problems in the context of real-world policy proposals is that she speaks of motion forward in a long struggle rather than pretending the Green New Deal is the solution for all our problems. On Fire makes it clear that there are no magic wands to wave, no magic bullets to fire.

The problem is that the Green New Deal does rely on one bit of magical thinking—the techno-optimism that emerges from the modern world’s underlying technological fundamentalism, defined as the faith that the use of evermore advanced technology is always a good thing. Extreme technological fundamentalists argue that any problems caused by the unintended consequences of such technology eventually can be remedied by more technology. (If anyone thinks this definition a caricature, read “An Ecomodernist Manifesto.”)

Klein does not advocate such fundamentalism, but that faith hides just below the surface of the Green New Deal, jumping out in “A Message from the Future with Alexandria Ocasio-Cortez,” which Klein champions in On Fire. Written by U.S. Rep. Ocasio-Cortez (the most prominent legislator advancing the Green New Deal) and Avi Lewis (Klein’s husband and collaborator), the seven-and-a-half minute video elegantly combines political analysis with engaging storytelling and beautiful visuals. But one sentence in that video reveals the fatal flaw of the analysis: “We knew that we needed to save the planet and that we had all the technology to do it [in 2019].”

First, talk of saving the planet is misguided. As many have pointed out in response to that rhetoric, the Earth will continue with or without humans. Charitably, we can interpret that phrase to mean, “reducing the damage that humans do to the ecosphere and creating a livable future for humans.” The problem is, we don’t have all technology to do that, and if we insist that better gadgets can accomplish that, we are guaranteed to fail.

Reasonable people can, and do, disagree about this claim. (For example, “The science is in,” proclaims the Nature Conservancy, and we can have a “future in which catastrophic climate change is kept at bay while we still power our developing world” and “feed 10 billion people.”) But even accepting overly optimistic assessments of renewable energy and energy-saving technologies, we have to face that we don’t have the means to maintain the lifestyle that “A Message from the Future” promises for the United States, let alone the entire world. The problem is not just that the concentration of wealth leads to so much wasteful consumption and wasted resources, but that the infrastructure of our world was built by the dense energy of fossil fuels that renewables cannot replace. Without that dense energy, a smaller human population is going to live in dramatically different fashion.

I don’t know what Klein actually thinks about this, but she does think drastic changes are coming, one way or another.  She writes:

Because while it is true that climate change is a crisis produced by an excess of greenhouse gases in the atmosphere, it is also, in a more profound sense, a crisis produced by an extractive mind-set, by a way of viewing both the natural world and the majority of its inhabitants as resources to use up and then discard. I call it the “gig and dig” economy and firmly believe that we will not emerge from this crisis without a shift in worldview at every level, a transformation to an ethos of care and repair.

Jensen adds:

The domination/subordination dynamic that creates so much suffering within the human family also defines the modern world’s destructive relationship to the larger living world. Throughout the book, Klein presses the importance of telling a new story about all those relationships. Scientific data and policy proposals matter, but they don’t get us far without a story for people to embrace. Klein is right, and On Fire helps us imagine a new story for a human future.

I offer a friendly amendment to the story she is constructing: Our challenge is to highlight not only what we can but also what we cannot accomplish, to build our moral capacity to face a frightening future but continue to fight for what can be achieved, even when we know that won’t be enough.

One story I would tell is of the growing gatherings of people, admittedly small in number today, who take comfort in saying forthrightly what they believe, no matter how painful—people who do not want to suppress their grief, yet do not let their grief overwhelm them.

by John Baez at September 14, 2019 08:11 AM

September 13, 2019

Christian P. Robert - xi'an's og

email footprint

While I was wondering (im Salzburg) at the carbon impact of sending emails with an endless cascade of the past history of exchanges and replies, I found this (rather rudimentary) assessment that, while standard emails had an average impact of 4g, those with long attachments could cost 50g, quoting from Berners-Lee, leading to the fairly astounding figure of an evaluated impact of 1.6 kg a day or more than half a ton per year! Quite amazing when considering that a round flight Paris-Birmingham is producing 80kg. Hence justifying a posteriori my habit of removing earlier emails when replying to them. (It takes little effort to do so, especially in mailers where this feature can be set as the default option.)
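
For what it is worth, the daily figure is easy to reproduce from the quoted per-email costs (the daily email counts below are my own guesses, chosen only to exhibit one combination matching the 1.6 kg):

# back-of-the-envelope daily and yearly footprint
150*4+20*50  # 150 standard emails plus 20 with attachments: 1600 g a day
1.6*365      # 584 kg a year, i.e. more than half a ton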

by xi'an at September 13, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

Astronomers May Have Found an Interstellar Comet. Here's Why That Matters.
Astrophysicist Karl Battams tells us what we can learn by studying objects from outside our solar system.

September 13, 2019 04:22 PM

Peter Coles - In the Dark

The Open Journal of Astrophysics: Scholastica Webinar and Plan S

Just a quick post to advertise the fact that I’ve been invited by Scholastica to do a webinar (whatever that is) about the Open Journal of Astrophysics, which will involve a short presentation delivered over the interwebs jointly by myself and Fiona Morley (Head of Digital Programmes and Information Systems at Maynooth University Library), followed by a question and answer session. The session will be conducted via Zoom (which is the pretty neat platform we use, e.g., for Euclid teleconference meetings).

Here is the advert:

You can sign up here.

While I’m on the subject(s) of Scholastica and the Open Journal of Astrophysics, I thought I’d add a bit of news about Plan S. Scholastica has been working hard behind the scenes to develop a roadmap that will enable its journals to become compliant with Plan S. The roadmap is here. Three important landmarks on it are:

  • Core machine-readable XML metadata in the JATS standard for all articles
  • Automated Digital Object Identifier (DOI) registration through Crossref
  • Automated metadata, including funding sources, deposited into major indexes and archives including DOAJ and Portico

Currently we do some of these manually for each article, and it’s nice to see that Scholastica is intending to provide these services automatically which will save us (i.e. me) a considerable amount of fiddling about!

by telescoper at September 13, 2019 09:44 AM

September 12, 2019

Christian P. Robert - xi'an's og

open reviews

When looking at a question on X validated, on the expected Metropolis-Hastings ratio being one (not all the time!), I was somewhat bemused at the OP linking to an anonymised paper under review for ICLR, as I thought this was breaching standard confidentiality rules for reviews. Digging a wee bit deeper, I realised this was a paper from the previous ICLR conference, already published both on arXiv and in the 2018 conference proceedings, and that ICLR was actually resorting to an open review policy where both papers and reviews were available and even better where anyone could comment on the paper while it was under review. And after. Which I think is a great idea, the worst possible situation being a poor paper remaining un-discussed. While I am not a big fan of the brutalist approach of many machine-learning conferences, where the restrictive format of both submissions and reviews is essentially preventing in-depth reviews, this feature should be added to statistics journal webpages (until PCIs become the norm).
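
For the record, the computation behind the “expectation one” statement is a one-liner (standard, and valid assuming the ratio is almost surely well-defined, which is exactly where the “not all the time!” caveat bites): for \(x\sim\pi\) and \(y\sim q(x,\cdot)\), \[ \mathbb{E}\left[\frac{\pi(y)\,q(y,x)}{\pi(x)\,q(x,y)}\right] = \int\!\!\int \pi(x)\,q(x,y)\,\frac{\pi(y)\,q(y,x)}{\pi(x)\,q(x,y)}\,{\rm d}y\,{\rm d}x = \int\!\!\int \pi(y)\,q(y,x)\,{\rm d}x\,{\rm d}y = 1. \]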

by xi'an at September 12, 2019 10:19 PM

Lubos Motl - string vacua and pheno

Moore & ladies: high-Hodge vacua are less numerically dominant than thought
There are many interesting new hep-th papers today. The first author of the first paper is Heisenberg; Kallosh and Linde have a post-Planck update on the CMB – a pattern claimed to be compatible with the KKLT; there are 17 new papers omitting cross-listings, but I choose the second paper:
Flux vacua: A voluminous recount
If we overlook the title that tries to please Al Gore if not Hillary Clinton (too late), Miranda Cheng, Greg Moore, and Natalie Paquette (Amsterdam-Rutgers-Caltech) work to avoid an approximation that is often involved while counting the flux vacua – you know, the computations that yield numbers such as the insanely overcited \(10^{500}\).



In particular, the previously neglected "geometric factor" is \[ \frac{1}{\pi^{m/2}}\int \det ({\mathcal R} + \omega\cdot 1) \] OK, something like a one-loop measure factor in path integrals. This factor influences some density of the vacua. Does the factor matter?



They decide that sometimes it does, sometimes it does not. More curiously, they find that this factor tends to be an interestingly behaved function of the topological invariants: it decreases steeply towards a minimum as you increase some topological numbers, and then it starts to increase again.

Curiously enough, the minimum is pretty much reached for the values of the topological numbers that are exactly expected to be dominant in the string compactifications. In this sense, the critical dimensions and typical invariants in string theory conspire to produce the lowest number of vacua that is mathematically possible, at least when this geometric factor is what we look at.

This is a "hope" I have been explicitly articulating many times – that if you actually count the vacua really properly, or perhaps with some probability weighting that has to be there to calculate which of them could have arisen at the beginning, the "most special or simplest" vacua could end up dominating.

They're not quite there, but they have found a substantial factor that reduces – though not sufficiently – the number of vacua with very large Hodge numbers, i.e. in this sense "complicated topologies of the compactification manifolds". Note that large Hodge numbers (which may become comparable to 1,000 for Calabi-Yau threefolds) are really needed to get high estimates of the number of vacua such as that \(10^{500}\). You need many cycles and many types of fluxes to obtain the high degeneracies.

Wouldn't it be prettier if the Occam-style vacua with the lowest Hodge numbers were the contenders to become the string vacuum describing the world around us? There could still be a single viable candidate. I have believed that the counting itself is insufficient and the Hartle-Hawking-like wave function gives a probabilistic weighting that could pick the simplest one. They have some evidence that previously neglected effects could actually suppress the very number of the vacua with the large Hodge numbers or other signs of "contrivedness".

Clearly, everyone whose world view finely depends on claims about "the large number of string vacua" has the moral duty to study the paper and perhaps to try to go beyond it.

by Luboš Motl (noreply@blogger.com) at September 12, 2019 04:04 AM

September 09, 2019

Jon Butterworth - Life and Physics

Scientific Exile?
This article in the Guardian describes a situation which is already happening. I have personally been in two recent scientific/training network planning meetings in which UK leadership was ruled out as a possibility (by mutual agreement) as too risky in the … Continue reading

by Jon Butterworth at September 09, 2019 12:45 PM

September 08, 2019

Emily Lakdawalla - The Planetary Society Blog

Here's How I'm Celebrating 53 Years of Star Trek
Star Trek Voyager Emergency Medical Hologram and Planetary Society Board member Robert Picardo uncovers rare Star Trek artifacts at The Planetary Society.

September 08, 2019 05:30 AM

September 05, 2019

Axel Maas - Looking Inside the Standard Model

Reflection, self-criticism, and audacity as a scientist
Today, I want to write a bit about me as a scientist, rather than about my research. It is about how I deal with our attitude towards being right.

As I still do particle physics, we are obviously not done with it. Meaning, we have no full understanding. As we try to understand things better, we make progress, and we make both wrong assumptions and actual errors. The latter because we are human, after all. The former because we do not yet know better. Thus, we necessarily know that whatever we do will not be perfect. In fact, especially when we enter unexplored territory, what we do is more likely than not to fall short of the final answer. This has led to a quite defensive way of presenting results. In fact, many conclusions of papers read more like an enumeration of everything that could be wrong with what was written than of what has been learned. And because we are not in perfect control of what we are doing, anyone who is trying to twist things in a way they like will find a way to do so, thanks to all the cautious presentation. On the other hand, if we were not so defensive, and acted like we think we are right when we are not - well, this would also be held against us, right?

Thus, as a scientist one is caught in an eternal limbo between actually believing one's own results and thinking that they can only be wrong. If you browse through scientists on, e.g., Twitter, you will see that this is a state which is not easy to endure. This is aggravated by a science system which has been geared by neoliberalism towards competition, and by populist movements who need to discredit science to further their own ends, no matter the cost. To deal with both, we need to be audacious and make our claims bold. At the same time, we know very well that any claim to be right is potentially wrong, thus enhancing the perpetual cycle of self-doubt on an individual level. On a collective level this means that science gravitates towards things which are simple and incremental, as there the chance of being wrong is smaller than when trying to do something more radical or new. Thus, this kind of pressure reduces science from revolutionary to evolutionary, with all the consequences. It also damns us to avoid taking all the consequences of our results, because they could be wrong, couldn't they?

In the case of particle physics, this slows us down. One of the reasons, at least in my opinion, why there is no really big vision of how to push forward is exactly this fear of being wrong. We are at a time where we have too little evidence to take evolutionary steps. But rather than make the bold step of just going exploring, we try to cover every possible evolutionary direction. Of course, one reason is that, being in a competitive system, we have no chance of being bold more than once. If we are wrong with this, it will probably create a dead stop for decades. In other fields of science the consequences can be much more severe. E.g. in climate science, this may very well be the difference between extinction of the human species and its survival.

How do I deal with this? Well, I have been far too privileged, and in addition I was lucky a couple of times. As a consequence, I could weather the consequences of being a bit more revolutionary and a bit more audacious than most. However, I also see that if I had not been, I would probably have had an easier career still. But this does not remove my own doubt about my results. After all, what I do has far-reaching consequences. In fact, I am questioning very much the conventional wisdom in textbooks, and want to reinterpret the way the standard model (and beyond) describes the particles of the world we are living in. Once in a while, when I realize what I am claiming, I can get scared. Other times, I feel empowered by how things seem to fall into place, and I do not see where the edges fail to fit. Thus, I live in my own cycle of doubt.

Is there anything we can do about the nagging self-doubt, the timidity and the feeling of being an imposter? Probably not so much as individuals, except for taking good care of oneself and working with people with a positive attitude about our common work. Much of the problem is systemic. Some of it could be dealt with by taking the heat of competition out of science and moving to a cooperative model. This will only work out if there is more access to science positions, and more resources to do science. After all, there are right now far more people wanting a position as a scientist than there are positions available. No matter what we do, this always creates additional pressure. But even that could be reduced by having controllable career paths, more mentoring, easier transitions out of science, and much more feedback. This not only requires long-term commitments on behalf of research institutes, but also that scientists themselves acknowledge these problems. I am very happy to see that this consciousness grows, especially with younger people getting into science. Too many scientists I encounter blatantly deny that these problems exist.

However, in the end, these problems are also connected to societal issues at large. The current culture is extremely competitive, and more often than not rewards selfish behavior. Also, there is, both in science and in society, a strong tendency to give to those who already have. And such a society shapes science as well. It will be necessary for society to reshape itself towards a more cooperative model to get a science which is much more powerful and forward-moving than what we have today. On the other hand, existential crises of the world, like the climate crisis or the rise of fascism, are also facilitated by a competitive society, and could therefore likely be overcome by having a more cooperative and equal society. Thus, dealing with the big problems will also help solve the problems of scientists today. I think this is worthwhile, and I invite any fellow scientist, and anyone else, to do so.

by Axel Maas (noreply@blogger.com) at September 05, 2019 02:51 PM

September 04, 2019

John Baez - Azimuth

UN Climate Action Summit

Hello, I’m Christian Williams. I study category theory with John Baez at UC Riverside. I’ve written two posts on Azimuth about promising distributed computing endeavors. I believe in the power of applied theory – that’s why I left my life in Texas just to work with John. But lately I’ve begun to wonder if these great ideas will help the world quickly enough.

I want to discuss the big picture, and John has kindly granted me this platform with such a diverse, intelligent, and caring audience. This will be a learning process. All thoughts are welcome. Thanks for reading.

(Greta Thunberg, coming to help us wake up.)

…..
I am the master of my fate,
      I am the captain of my soul.

It’s important to be positive. Humanity now has a global organization called the United Nations. Just a few years ago, members signed an amazing treaty called The Paris Agreement. The parties and signatories:

… basically everyone.

By ratifying this document, the nations of the world agreed to act to keep global warming below 2C above pre-industrial levels – an unparalleled environmental consensus. (On Azimuth, in 2015.) It’s not mandatory, and to me that’s not the point. Together we formally recognize the crisis and express the intent to turn it around.

Except… we really don’t have much time.

We are consistently finding that the ecological crisis is of a greater magnitude and urgency than we thought. The report that finally slapped me awake is the IPCC 2018, which explains the difference between 2C and 1.5C in terms of total devastation and lives, and states definitively:

We must reduce global carbon emissions by 45% by 2030, and by 100% by 2050 to keep within 1.5C. We must have strong negative emissions into the next century. We must go well beyond our agreement, now.

(Blue is essentially, “we might still have a stable society”.)

So… how is our progress on the agreement? That is complicated, and a whole analysis is yet to be done. Here is the UN progress tracker. Here is an NRDC summary. Some countries are taking significant action, but most are not yet doing enough. Let that sink in.

However, the picture is much deeper than only national. Reform sparks at all levels of society: a US politician wanting to leave the agreement emboldened us to form the vast coalition We Are Still In. There are many initiatives like this, hundreds of millions of people rising to the challenge. A small selection:

City and State Levels
  • Mayors National Climate Action Agenda, U.S. Climate Alliance
  • Covenant of Mayors for Climate & Energy
International Levels
  • Reducing emissions from deforestation and forest degradation (REDD)
  • RE100, Under2 Coalition (The Climate Group)
Everyone Levels
  • Fridays for Future, Sunrise Movement, Extinction Rebellion
  • 350.org, Climate Reality

Each of us must face this challenge, each in our own way.

…..

Responding to the findings of the IPCC, the UN is meeting in New York on September 23, with even higher ambitions and higher stakes: UN Climate Action Summit 2019. The leaders will not sit around and give pep talks. They are developing plans which will describe how to transform society.

On the national level, we must make concrete, compulsory commitments. If leaders do not act soon, then we must demand louder, or take their place. The same week as the summit, there will be a global climate strike. It is crucial that all generations join the youth in these demonstrations.

We must change how the world works. We have reached global awareness, and we have reached an ethical imperative.

Please listen to an inspiring activist share her lucid thoughts.

by christianbwilliams at September 04, 2019 05:47 PM

CERN Bulletin

Elections of employee delegates – Become a member, get involved and vote!

The Association's Staff Council is renewed every two years. This year, in October, elections will be held for your employee delegates for the 2020-2021 term. 

The number of positions to be filled is 50:

  • 45 positions to represent the incumbents;
  • 5 positions to represent fellows and associated staff members (Users, Project Associates, among others).

These 50 positions are open to members of personnel who are also members of the CERN Staff Association.

In addition to these 50 positions, there are also 7 pensioners' delegates appointed by GAC-EPA outside this electoral process.

It is essential for the Organization to have a balanced and diversified representation on the Staff Council, including delegates from all departments and units, all professional categories and all opinions.

What is the mandate of the Staff Association?

The mandate of the Association is defined in Article I.1.2 of the Statutes of the CERN Staff Association.

The primary mission of the Association is to "serve and defend the economic, social, professional and moral interests of its members and all CERN staff, with particular reference to the observance of the Staff Rules and all the statutory texts arising therefrom, as well as the improvement of employment, working, safety and welfare conditions in the widest possible sense".

The following missions should also be highlighted: safeguarding the rights and defending the interests of the families of CERN personnel and of the beneficiaries of the CERN Pension Fund; promoting good relations between CERN members of personnel and other staff working on the site; and participating, with the Council of the Organization and the Director-General, in the search for and implementation of the means to fulfil the Organization's mission as defined by the Constitutive Convention.

Article VII 1.01 of the Staff Regulations and Rules (SR&R) further stipulates that "Independently of the hierarchical channels, the relations between the Director-General and the personnel shall be established either on an individual basis or on a collective basis with the Staff Association as intermediary".

As mentioned in our last issue of ECHO, the essential role of the Staff Association as the representative of all the Organization's personnel to the Director-General and Member States is reflected in its considerable and diversified work, with active participation in the various joint committees defined in the Staff Rules and Regulations but also in all working groups and official bodies dealing with staff matters.

What is the role of the staff delegate?

The role of the staff delegate[1] is essential because he or she will participate in the meetings of the Staff Council, whose mission includes determining the broad lines of the Association’s policy, supervising its implementation by the Executive Committee, electing the Executive Committee and convening an ordinary or extraordinary General Assembly. He or she will also be the point of contact and link between the members of personnel and the Staff Association and will have to advise or guide his or her colleagues according to the issues they encounter.

Being a staff delegate also means participating in the work of at least one of the internal committees of the Staff Association on specific topics (pension, social protection, conditions of employment, health and safety, legal issues, drafting articles or dealing with questions of information and training) and thus sharing ideas and proposals, but also being mandated by the Association to participate in the work of official bodies of the Organisation or various working groups. 

But being a representative is first and foremost about serving and defending the personnel and being an active citizen of the Organization by contributing to the revision and writing of its laws (Operational & Administrative Circulars) and processes.

Becoming a delegate also means personal and professional development

Becoming a delegate is surely also an enrichment that finds application in one's other professional activities:

  • Discover other aspects of the Organization than those previously encountered;
  • Acquire new skills through new roles and activities, profitable in all areas of one's activities at CERN; and
  • Meet new colleagues.

Who can be a delegate? How to register for the next elections?

All Employed members of personnel (MPE), fellows and Associated members of personnel (MPA), who are also members of the Staff Association, can sign up and apply from September 3 at 8 a.m. until October 11 at 5 p.m. (click on this link).

You will then have three weeks, from October 21 to November 11, to choose your delegates. In order to participate in the election of your delegates, you must be a member of the Staff Association.

If this is not yet the case, these elections give you the opportunity to become a member, especially since the membership for the 2019 calendar year is free of charge as from September 1, 2019.

Taking part in the elections of your delegates in the Staff Council means choosing from among the candidates the persons best able to represent you.

It also means participating in a democratic exercise within the Organization.

Announcement of the results and implementation of the new Staff Council

At the end of November or in early December, the Election Commission, which follows the election process, will officially announce the election results in the ECHO newspaper.

It is through this announcement that you will be able to identify your delegates in your college; they will always be available to you during their term of office.

Become a member, get involved and vote!

Join and vote: it is the least you can do for your Organization and for your fellow staff members.

Get involved for your Organization, for your Electoral College colleagues, for CERN staff and, more generally, for the CERN community!

Become a delegate!

***

[1] Art. V.1.2 of the Staff Association Statutes

September 04, 2019 03:09 PM

CERN Bulletin

Staff Council elections – Join, get involved and vote!

The Staff Council of the Association is renewed every two years. This year, in October, the elections of your staff delegates for the 2020-2021 mandate will take place.

There are 50 seats to fill:

  • 45 seats to represent staff members;
  • 5 seats to represent fellows and associated members of the personnel (Users and Project Associates, among others).

These 50 seats are open to members of the personnel who are also members of the CERN Staff Association.

In addition to these 50 seats, the Council also includes 7 delegates of the pensioners, designated by the CERN Pensioners' Association (GAC-EPA) outside this electoral process.

It is essential for the Organization that the Staff Council have a balanced and diverse representation, including delegates from all departments and units, all professional categories and all points of view.

What is the mandate of the Staff Association?

The mandate of the Association is defined in Article I.1.2 of the Statutes of the CERN Staff Association.

The Association's first mission is to "serve and defend the economic, social, professional and moral interests of its members and of all CERN personnel, in particular by ensuring compliance with the Staff Rules and all regulatory texts deriving from them, and by working for the improvement of conditions of employment, work, safety and well-being, in the broadest sense of the term".

Also worth highlighting are its missions of: safeguarding the rights and defending the interests of the families of CERN personnel and of the beneficiaries of the CERN Pension Fund; promoting good relations between members of the CERN personnel and the other personnel working on the site; and participating, with the Council of the Organization and with the Director-General, in the search for and implementation of the means to carry out the mission of the Organization as defined by its founding Convention.

Moreover, Article VII 1.01 of the Staff Rules and Regulations (SR&R) states that "Independently of the hierarchical structure, relations between the Director-General and the personnel shall be established either on an individual basis or on a collective basis through the intermediary of the Staff Association".

As mentioned in our last issue of ECHO, the essential role of the Staff Association as the representative of all the personnel of the Organization vis-à-vis the Director-General and the Member States translates into a considerable and varied workload, with active participation in the various joint committees defined in the Staff Regulations as well as in all the working groups and official bodies dealing with personnel matters.

What is the role of a staff delegate?

The role of a staff delegate is essential: delegates take part in the meetings of the Staff Council[1], whose mission is, among other things, to set the broad lines of the Association's policy, to monitor their implementation by the Executive Committee, to elect the Executive Committee and to convene the General Assembly in ordinary or extraordinary session. A delegate is also the point of contact and the link between the members of the personnel and the Staff Association, and is expected to advise or guide colleagues on the problems they encounter.

Being a staff delegate also means taking part in the work of at least one of the Staff Association's internal commissions on specific topics (pensions, social protection, employment conditions, health and safety, legal matters, writing articles, or matters of information and training), and thus contributing one's ideas and proposals; it can also mean being mandated by the Association to take part in the work of the Organization's official bodies or various working groups.

But becoming a delegate means, above all, wanting to serve and defend the personnel, and being an active citizen of the Organization by contributing to the revision and drafting of its rules (operational and administrative circulars) and processes.

Becoming a delegate also means personal and professional development

Becoming a delegate is surely also an enriching experience that carries over into one's other professional activities:

  • Discovering aspects of the Organization beyond those previously encountered;
  • Acquiring new skills through new roles and activities, useful in every area of one's work at CERN; and
  • Getting to know new colleagues.

Who can be a delegate, and how can you stand in the upcoming elections?

All staff members, fellows and associates who are also members of the Staff Association can get involved and submit their candidacy from 3 September at 8 a.m. until 11 October at 5 p.m. (click on this link).

You will then have three weeks, from 21 October to 11 November, to elect your staff delegates.

In order to take part in the election of your representatives, you must be a member of the Staff Association.

If that is not yet the case, these elections give you the opportunity to become a member, all the more so as membership for the 2019 calendar year is free from 1 September 2019.

Taking part in the elections of your representatives to the Staff Council means choosing, from among the candidates, the people best placed to represent you.

It also means taking part in a democratic exercise within the Organization.

Announcement of the results and installation of the new Staff Council

At the end of November or the beginning of December, the Electoral Commission overseeing the election process will officially announce the results of the elections in the ECHO journal.

This announcement is how you will be able to identify your representatives in your electoral college; they will remain at your disposal throughout their mandate.

Join, get involved and vote!

Joining and voting is the least you can do for your Organization and for your colleagues, the members of the personnel.

Get involved for your Organization, for the colleagues in your electoral college, for the CERN personnel and, more generally, for the CERN community!

Become a delegate!

 

***

[1] Art. V.1.2 of the Statutes of the CERN Staff Association

September 04, 2019 11:09 AM

September 03, 2019

Emily Lakdawalla - The Planetary Society Blog

The September Equinox 2019 Issue of The Planetary Report Is Out!
A new issue of The Planetary Report brings you our pride in the success of LightSail 2 and our gratitude to our members for making it happen. Plus Venus science from Akatsuki and Venus Express, and the status of planetary defense.

September 03, 2019 03:23 PM

August 31, 2019

Clifford V. Johnson - Asymptotia

Two Days at San Diego Comic-Con 2019

[Photo: Avengers cosplayers in the audience of my Friday panel.]

It might surprise you to know just how much science gets into the mix at Comic-Con. This never makes it to the news of course - instead it's all stories about people dressing up in costumes, and of course features about big movie and TV announcements. Somewhere inside this legendary pop culture maelstrom there's something for nearly everyone, and that includes science. Which is as it should be. Here's a look at two days I spent there. [I took some photos! All except two shown here are mine; you can click on any photo to enlarge it.]

Day 1 – Friday

I finalized my schedule rather late, and so wasn’t sure of my hotel needs until it was far too late to find two nights in a decent hotel within walking distance of the San Diego Convention Center — well, not for prices that would fit with a typical scientist’s budget. So, I’m staying in a motel that’s about 20 minutes away from the venue if I jump into a Lyft.

My first meeting is over brunch at the Broken Yolk at 10:30am, with my fellow panellists for the panel at noon, “Entertaining Science: The Real, Fake, and Sometimes Ridiculous Ways Science Is Used in Film and TV”. They are Donna J. Nelson, chemist and science advisor for the TV show Breaking Bad (she has a book about it), Rebecca Thompson, physicist and author of a new book about the science of Game of Thrones, and our moderator Rick Loverd, the director of the Science and Entertainment Exchange, an organization set up by the National Academy of Sciences. I’m on the panel also as an author (I wrote and drew a non-fiction graphic novel about science called The Dialogues). My book isn’t connected to a TV show, but I’ve worked on many TV shows and movies as a science advisor, and so this rounds out the panel. All our books are from [...] Click to continue reading this post

The post Two Days at San Diego Comic-Con 2019 appeared first on Asymptotia.

by Clifford at August 31, 2019 05:56 AM

August 29, 2019

John Baez - Azimuth

The Binary Octahedral Group

The complex numbers together with infinity form a sphere called the Riemann sphere. The 6 simplest numbers on this sphere lie at points we could call the north pole, the south pole, the east pole, the west pole, the front pole and the back pole. They’re the corners of an octahedron!

On the Earth, I’d say the “front pole” is where the prime meridian meets the equator at 0°N 0°E. It’s called Null Island, but there’s no island there—just a buoy. Here it is:

Where’s the back pole, the east pole and the west pole? I’ll leave two of these as puzzles, but I discovered that in Singapore I’m fairly close to the east pole:

If you think of the octahedron’s corners as the quaternions \pm i, \pm j, \pm k, you can look for unit quaternions q such that whenever x is one of these corners, so is qxq^{-1}. There are 48 of these! They form a group called the binary octahedral group.

By how we set it up, the binary octahedral group acts as rotational symmetries of the octahedron: any transformation sending x to qxq^{-1} is a rotation. But this group is a double cover of the octahedron’s rotational symmetry group! That is, pairs of elements of the binary octahedral group describe the same rotation of the octahedron.

If we go back and think of the Earth’s 6 poles as points 0, \pm 1,\pm i, \infty on the Riemann sphere instead of \pm i, \pm j, \pm k, we can think of the binary octahedral group as a subgroup of \mathrm{SL}(2,\mathbb{C}), since this acts as conformal transformations of the Riemann sphere!

If we do this, the binary octahedral group is actually a subgroup of \mathrm{SU}(2), the double cover of the rotation group—which is isomorphic to the group of unit quaternions. So it all hangs together.

It’s fun to actually see the unit quaternions in the binary octahedral group. First we have 8 that form the corners of a cross-polytope (the 4d analogue of an octahedron):

\pm 1, \pm i , \pm j , \pm k

These form a group on their own, called the quaternion group. Then we have 16 that form the corners of a hypercube (the 4d analogue of a cube, also called a tesseract or 4-cube):

\displaystyle{ \frac{\pm 1 \pm i \pm j \pm k}{2} }

These don’t form a group, but if we take them together with the 8 previous ones we get a 24-element subgroup of the unit quaternions called the binary tetrahedral group. They’re also the vertices of a 24-cell, which is yet another highly symmetrical shape in 4 dimensions (a 4-dimensional regular polytope that doesn’t have a 3d analogue).

That accounts for half the quaternions in the binary octahedral group! Here are the other 24:

\displaystyle{  \frac{\pm 1 \pm i}{\sqrt{2}}, \frac{\pm 1 \pm j}{\sqrt{2}}, \frac{\pm 1 \pm k}{\sqrt{2}},  }

\displaystyle{  \frac{\pm i \pm j}{\sqrt{2}}, \frac{\pm j \pm k}{\sqrt{2}}, \frac{\pm k \pm i}{\sqrt{2}} }

These form the vertices of another 24-cell!
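
As a quick sanity check (my own illustrative sketch, not from the original post), here is a short Python script that builds the 48 unit quaternions listed above, verifies that they are closed under multiplication, and verifies that conjugation x \mapsto qxq^{-1} permutes the six corners \pm i, \pm j, \pm k:

from itertools import combinations, product

# Quaternions as 4-tuples (a, b, c, d) meaning a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)  # equals the inverse for unit quaternions

def key(q):
    return tuple(round(x, 9) for x in q)  # tame floating-point error

S = 2 ** -0.5
elements = []
for pos in range(4):                           # 8 corners of the cross-polytope
    for s in (1.0, -1.0):
        v = [0.0, 0.0, 0.0, 0.0]
        v[pos] = s
        elements.append(tuple(v))
for signs in product((0.5, -0.5), repeat=4):   # 16 hypercube corners
    elements.append(signs)
for p1, p2 in combinations(range(4), 2):       # 24 corners of the second 24-cell
    for s1, s2 in product((S, -S), repeat=2):
        v = [0.0, 0.0, 0.0, 0.0]
        v[p1], v[p2] = s1, s2
        elements.append(tuple(v))

G = {key(q) for q in elements}
assert len(elements) == len(G) == 48

# Closure: products of elements stay in the set, so we have a group.
assert {key(qmul(g, h)) for g in elements for h in elements} == G

# Conjugation permutes the octahedron's corners +-i, +-j, +-k.
corners = [(0.0, 1.0, 0.0, 0.0), (0.0, -1.0, 0.0, 0.0),
           (0.0, 0.0, 1.0, 0.0), (0.0, 0.0, -1.0, 0.0),
           (0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, -1.0)]
corner_keys = {key(x) for x in corners}
for g in elements:
    assert {key(qmul(qmul(g, x), conj(g))) for x in corners} == corner_keys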

The first 24 quaternions, those in the binary tetrahedral group, give rotations that preserve each one of the two tetrahedra that you can fit around an octahedron like this:

while the second 24 switch these tetrahedra.

The 6 elements

\pm i , \pm j , \pm k

describe 180° rotations around the octahedron’s 3 axes, the 16 elements

\displaystyle{   \frac{\pm 1 \pm i \pm j \pm k}{2} }

describe 120° clockwise rotations of the octahedron’s 8 triangles, the 12 elements

\displaystyle{  \frac{\pm 1 \pm i}{\sqrt{2}}, \frac{\pm 1 \pm j}{\sqrt{2}}, \frac{\pm 1 \pm k}{\sqrt{2}} }

describe 90° clockwise rotations holding fixed one of the octahedron’s 6 vertices, and the 12 elements

\displaystyle{  \frac{\pm i \pm j}{\sqrt{2}}, \frac{\pm j \pm k}{\sqrt{2}}, \frac{\pm k \pm i}{\sqrt{2}} }

describe 180° clockwise rotations of the octahedron’s 6 opposite pairs of edges.

Finally, the two elements

\pm 1

do nothing!

So, we can have a lot of fun with the idea that a sphere has 6 poles.

by John Baez at August 29, 2019 01:00 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

The DevOps Effect: Big and Beneficial Changes You Can Expect from Adapting DevOps Practices

There are plenty of excellent reasons why any business should implement the DevOps approach within their company. But here, we’re going to focus on the four major things you can expect when you implement DevOps strategies within your company.

Better Tools

One of the things that DevOps brings is the opportunity to use new technology, especially when it comes to tools and systems. Any company can benefit from re-tooling the workforce, for example by implementing production and operations planning software and by automating standard processes to ensure a smoother, more efficient flow of work.

Including these two things in your toolchain will ensure that no work is left undone and that each and every step in the process is completed. Tool efficiency improves work output, which in turn produces value that is reflected in the products and services that reach your customers.

Improved Mindset

Seeing things with fresh eyes is another thing that DevOps is particular about. By improving the ways and methods that have been in place for some time, an organization can assess the efficiency of their value stream mapping.

They have the opportunity to eliminate waste in the stream and improve work dependencies and relationships within and between teams. This results in a new perspective and outlook on how work should and must be done.

DevOps removes the barrier between back-end and front-end teams, allowing them to work together seamlessly and effectively, with better communication and work visibility to boot. This keeps both ends on the same page and lets each side change its approach to handing off work and providing feedback to the other.

Smoother Workflow

We mentioned value stream and workflow in the above paragraphs; these two elements are crucial to the successful application and adaptation of DevOps practices. Removing wasteful steps and processes, like unnecessary work, in the stream reduces the amount of time in between tasks, handoffs, and lead times.

The result is a higher work turn-in rate, faster delivery rates, and improved product and service quality. It fosters a more ideal work environment where the flow of work is continuous and with fewer constraints throughout the entire process.

Better Internal Communication


The dividing line between dev and ops is systematically eradicated as better and more systematic communication methods are implemented. Stronger communication paths bridge the gap between dev and ops, and a more collaborative effort toward creating valuable products and services enhances customer relationships. It is often said that work output mirrors the internal work systems and communication methods within a company or large organization.

Putting a premium on internal communication methods provides any business, big or small, with clear direction and the ability to ensure that things are flowing at the right pace, in the right order. It also paves the way for easier problem solving, as matters of concern can be raised quickly and easily through the appropriate channels.

It’s often been said that deeply ingrained institutional habits are hard to break and are often the reason a business fails to keep up with the ever-changing demands of its market and its customers. This is where adapting the DevOps approach will prove beneficial to your business, whether big or small.

The post The DevOps Effect: Big and Beneficial Changes You Can Expect from Adapting DevOps Practices appeared first on None Equilibrium.

by Bertram Mortensen at August 29, 2019 01:00 AM

August 28, 2019

Matt Strassler - Of Particular Significance

The New York Times Remembers A Great Physicist

The untimely and sudden deaths of Steve Gubser and Ann Nelson, two of the United States’ greatest talents in the theoretical physics of particles, fields and strings, have cast a pall over my summer and that of many of my colleagues.

I have not been finding it easy to write a proper memorial post for Ann, who was by turns my teacher, mentor, co-author, and faculty colleague.  I would hope to convey to those who never met her what an extraordinary scientist and person she was, but my spotty memory banks aren’t helping. Eventually I’ll get it done, I’m sure.

(Meanwhile I am afraid I cannot write something similar for Steve, as I really didn’t know him all that well. I hope someone who knew him better will write about his astonishing capabilities and his unique personality, and I’d be more than happy to link to it from here.)

In this context, I’m gratified to see that the New York Times has given Ann a substantive obituary, https://www.nytimes.com/2019/08/26/science/ann-nelson-dies.html, which also appears in the August 28th print edition, I’m told. It contains a striking (but, to those of us who knew her, not surprising) quotation from Howard Georgi. Georgi is a professor at Harvard who is justifiably famous as the co-inventor, with Nobel-winner Sheldon Glashow, of Grand Unified Theories (in which the electromagnetic, weak nuclear, and strong nuclear forces all emerge from a single force). He describes Ann, his former student, as being able to best him at his own game.

  • “I have had many fabulous students who are better than I am at many things. Ann was the only student I ever had who was better than I am at what I do best, and I learned more from her than she learned from me.”

He’s being a little modest, perhaps. But not much. There’s no question that Ann was an all-star.

And for that reason, I do have to complain about one thing in the Times obituary. It says “Dr. Nelson stood out in the world of physics not only because she was a woman, but also because of her brilliance.”

Really, NYTimes, really?!?

Any scientist who knew Ann would have said this instead: that Professor Nelson stood out in the world of physics for exceptional brilliance — lightning-fast, sharp, creative and careful, in the same league as humanity’s finest thinkers — and for remarkable character — kind, thoughtful, even-keeled, rigorous, funny, quirky, dogged, supportive, generous. Like most of us, Professor Nelson had a gender, too, which was female. There are dozens of female theoretical physicists in the United States; they are a too-small minority, but they aren’t rare. By contrast, a physicist and person like Ann Nelson, of any gender? They are extremely few in number across the entire planet, and they certainly do stand out.

But with that off my chest, I have no other complaints. (Well, admittedly the physics in the obit is rather garbled, but we can get that straight another time.) Mainly I am grateful that the Times gave Ann fitting public recognition, something that she did not actively seek in life. Her death is an enormous loss for theoretical physics, for many theoretical physicists, and of course for many other people. I join all my colleagues in extending my condolences to her husband, our friend and colleague David B. Kaplan, and to the rest of her family.

by Matt Strassler at August 28, 2019 12:31 PM

August 26, 2019

Jon Butterworth - Life and Physics

Being English abroad, 2019
My weekend was mostly spent on the French side of border country, experiencing serial incidents of Englishness. On Saturday we went to a lake and swam. There was a French guy who seemed to be staring at me while I … Continue reading

by Jon Butterworth at August 26, 2019 08:05 PM

John Baez - Azimuth

Civilizational Collapse (Part 4)

This is part 4 of an intermittent yet always enjoyable series:

Part 1: the rise of the ancient Puebloan civilization in the American Southwest from 10,000 BC to 750 AD.

Part 2: the rise and collapse of the ancient Puebloan civilization from 750 AD to 1350 AD.

Part 3: a simplified model of civilizational collapse.

This time let’s look at the collapse of Greek science and resulting loss of knowledge!

The Antikythera mechanism, found undersea in the Mediterranean, dates to somewhere between 200 and 60 BC. It’s a full-fledged analogue computer! It had at least 30 gears and could predict eclipses, even modelling changes in the Moon’s speed as it orbits the Earth.

What Greek knowledge was lost during the Roman takeover? We’ll never really know.

They killed Archimedes and plundered Syracuse in 212 BC. Ptolemy the Fat (“Physcon”) put an end to science in Alexandria in 154 BC with brutal persecutions.

Contrary to myth, the Library of Alexandria was not destroyed once and for all in a single huge fire. The sixth head librarian, Aristarchus of Samothrace, fled when Physcon took over. The library was indeed set on fire in the civil war of 48 BC. But it seems to have lasted until 260 AD, when it basically lost its funding.

When the Romans took over, they dumbed things down. In his marvelous book The Forgotten Revolution, quoted below, Lucio Russo explains the evil effects.

Another example: we have the first four books by Apollonius on conic sections—the more elementary ones—but the other three have been lost.

Archimedes figured out the volume and surface area of a sphere, and the area under a parabola, in a letter to Eratosthenes. He used modern ideas like ‘infinitesimals’! The letter was repeatedly copied and made its way into a 10th-century Byzantine parchment manuscript. But this parchment was written over by Christian monks in the 13th century, and only rediscovered in 1906.

There’s no way to tell how much has been permanently lost. So we’ll never know the full heights of Greek science and mathematics. If we hadn’t found one example of an analogue computer in a shipwreck in 1902, we wouldn’t have guessed they could make those!

And we shouldn’t count on our current knowledge lasting forever, either.

Here are some more things to read. Most of all I recommend this book:

• Lucio Russo, The Forgotten Revolution: How Science Was Born In 300 BC And Why It Had To Be Reborn, Springer, Berlin, 2013. (First chapter.)

Check out the review by Sandro Graffi (who taught me analysis when I was an undergrad at Princeton):

• Sandro Graffi, La Rivoluzione Dimenticata (The Forgotten Revolution), AMS Notices (May 1998), 601–605.

Only in 1998 did scholars get serious about recovering information from the Archimedes palimpsest using ultraviolet, infrared and other imaging techniques! You can now access it online:

The Archimedes Palimpsest Project.

Here’s a good book on the rediscovery and deciphering of the Archimedes palimpsest, and its mathematical meaning:

• Reviel Netz and William Noel, The Archimedes Codex: Revealing the Secrets of the World’s Greatest Palimpsest, Hachette, UK, 2011.

Here’s a video:

• William Noel, Revealing the lost codex of Archimedes, TED, May 29, 2012.

Here are 9 videos on recreating the Antikythera mechanism:

Machining the Antikythera mechanism, Clickspring.

The Wikipedia articles are good too:

• Wikipedia, Antikythera mechanism.

• Wikipedia, Archimedes palimpsest.

• Wikipedia, Library of Alexandria.

by John Baez at August 26, 2019 08:10 AM

August 22, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Types of Email Security Protocol

Communicating with your clients and business partners is among the crucial elements of business operations. Of course, you will not achieve much if your communication lines are broken. Thanks to the Internet, it is not as difficult to keep in touch with clients and advertise your products. One of the leading platforms for communication and advertising is email. Unfortunately, this platform is also one of the most vulnerable to modern cyberattacks.

Managed IT service companies in Phoenix and other cities now focus on security protocols for your emails. These protocols are structures designed to protect your emails from third-party interference. Your SMTP (simple mail transfer protocol) has no embedded security and is vulnerable to all manner of malware that hackers might send to your company in the form of attachments on seemingly genuine emails.

Here are your email security protocol alternatives:

TLS and SSL


Transport layer security (TLS) is the successor of the secure sockets layer (SSL), which was deprecated in 2015. These are application-layer protocols that standardize communication for end users. In email security, these protocols provide a security framework that works in conjunction with SMTP to secure emails. TLS works by initiating a series of “handshakes” with your email server when you receive an email. These are steps the server takes to validate the email’s encryption settings and verify its security before the email is transmitted. TLS therefore provides base-level encryption for email on your network.
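
To see what the upgrade step looks like in practice, here is a minimal Python sketch of my own (the host and addresses are placeholders, not anything from this article) of a client securing an SMTP session with STARTTLS before sending:

import smtplib
import ssl

SMTP_HOST = "smtp.example.com"  # placeholder relay
SMTP_PORT = 587                 # standard submission port for STARTTLS

context = ssl.create_default_context()  # verify certificates against system CAs

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.ehlo()
    server.starttls(context=context)  # the TLS "handshake": upgrade the plaintext channel
    server.ehlo()
    # server.login("user", "password")  # if the relay requires authentication
    server.sendmail("alice@example.com", ["bob@example.org"],
                    "Subject: TLS test\r\n\r\nSent over an encrypted channel.")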

Digital Certificates

These are cryptographic tools used to secure your emails. A certificate lets others send you mail encrypted under a predefined encryption key, and lets you encrypt your outgoing mail. You, after all, would not want to be known as the company that sends malware to clients and partners. The public key of your digital certificate is available to anyone who wants to send you encrypted email, while you decrypt the mail you receive using the private key.
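
As a conceptual illustration of my own (real email certificates wrap this principle in S/MIME certificates and standard message formats; this sketch, using the third-party cryptography package, shows only the asymmetric core):

# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt; only the private key decrypts.
ciphertext = public_key.encrypt(b"confidential message", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"confidential message"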

The SPF (Sender Policy Framework)

This is an authentication protocol specifically designed to protect your network against domain spoofing. SPF introduces extra security checks into your email server that determine whether an incoming message really came from the domain it claims, or whether someone is masking their identity using that domain. The domain, in this case, is a section of the Internet under one name. Hackers will often hide their domains to avoid being blacklisted or traced when disguising a malicious mail as coming from a healthy domain.
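
For illustration, here is a sketch of my own (assuming the third-party dnspython package and a placeholder domain) that looks up the SPF policy a domain publishes in its DNS TXT records:

# pip install dnspython
import dns.resolver

def get_spf(domain):
    """Return the v=spf1 policy in the domain's TXT records, if any."""
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt  # e.g. "v=spf1 include:_spf.example.com ~all"
    return None

print(get_spf("example.com"))  # placeholder domain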

DKIM (DomainKeys Identified Mail)


DKIM denotes an anti-tamper procedure that ensures your sent emails remain intact on the way to the recipient. It employs digital signatures to verify that an email was sent by a particular domain and checks that the domain is authorized to send it. To this end, DKIM is considered an extension of SPF. It also eases the process of developing domain whitelists and blacklists.
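
Verifying such a signature on a received message is short in my own sketch below (assuming the third-party dkimpy package and a hypothetical raw message file):

# pip install dkimpy
import dkim

with open("message.eml", "rb") as f:  # hypothetical raw RFC 5322 message
    raw = f.read()

print(dkim.verify(raw))  # True if the DKIM-Signature header checks out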

Hackers are ever more focused on your email security vulnerabilities nowadays. They know that opening emails is a crucial undertaking in your business, since you cannot afford to ignore messages. The security protocols above will go a long way toward ensuring that your emails do not open you up to cyberattacks.

The post Types of Email Security Protocol appeared first on None Equilibrium.

by Bertram Mortensen at August 22, 2019 06:11 AM

August 21, 2019

Jon Butterworth - Life and Physics

MMR and me. And propaganda.
Originally posted on Life and Physics:
I have a doctorate in physics. My wife has one in chemistry. We have an 11-year-old son, who should have got his MMR jab in 2003 A lot has been written about the MMR…

by Jon Butterworth at August 21, 2019 05:28 PM

August 20, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Choosing the Right Flow Meter for an Application

Flow meters are used to measure water, gasoline, chemicals, engine oils, dairy, industrial liquids, airflow, and transmission fluids. Getting the right one for a specific application is a must, as your money would be wasted if you choose the wrong one.

These flow meters are essential for data collection, which is important for engineers. There is a process for choosing the right industrial flow meters for water, and today we will discuss what you should remember when choosing a flow meter for your application.

Don’t Just Go for the Popular or Cheap One

You should never get a flow meter just because it is cheap or popular. Many engineers decide based on these factors and often end up regretting their choices, as more often than not they have to spend more money than they originally intended.

Chances are, if the flow meter is cheap, you will have to spend a lot of money on ancillary equipment and expensive maintenance. This means that if you invest in a high-quality flow meter instead, you will save a lot of cash, as you can use it for years to come. Also, many of the most popular flow meters are not suitable for your application, so it is best to do your research before actually buying one.

Consider New Flow Technologies

New flow technologies offer new solutions, which is why you should consider looking at the newer models on the market. Older models such as inline ultrasound flow meters had to be re-calibrated whenever a new type of fluid was introduced to them. They also cannot be used in applications where hygiene is important, which means you would have to buy another flow meter if this is one of your main concerns.

Flow meters are technical instruments influenced by many variables. Every application is unique and needs a different type of flow meter to work properly.

Consider the Flow Measurement

Fluids are measured in two ways: by volume and by mass. Know which one you will be measuring so you can get the right type of flow meter; different flow meters are used for volume and for mass. If you know how to calculate volume from mass and vice versa, then you can get by with just one flow meter. This takes more time, though, and you would need to know the fluid's density and the other agreed variables.

Know What You’ll be Measuring

As mentioned, there are different categories when it comes to flow meter measurement. Before getting a flow meter, you should know what you’ll be measuring: is it gas, liquid, slurry, or vapor? A lot of flow meters cannot measure gas or slurry, which is why it is important to do your research so you know which flow meter can handle the medium you are measuring.

There are tons of variables that you would need to consider when buying a flow meter, so you should carefully decide as to not waste any time and money.

 

The post Choosing the Right Flow Meter for an Application appeared first on None Equilibrium.

by Bertram Mortensen at August 20, 2019 01:00 AM

August 19, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

5 Marketing Mistakes Franchisors Should Watch Out For

After finalizing your franchise plans and models, packages, and opportunities, the next step is selling your franchise. But as with all marketing activities, selling your franchise can be quite costly and can take up a lot of resources, which is why you’d want to make sure that every marketing activity and resource you invest in has a good chance of attracting potential franchisees. It’s also important for franchisors to be aware of common marketing mistakes that will not only cost money but may also push away potential franchisees.

Relying Too Much on Hard Selling

It’s indeed a huge plus to have aggressive franchise brokers and franchise sales agents, but you shouldn’t rely only on them. Many potential franchisees don’t want to be “pushed” or “persuaded” to buy a franchise but prefer to have all the information they need so that they can decide for themselves whether or not to buy your franchise. That said, you should provide this information on your website, and perhaps in the franchise opportunity brochures and portfolios (both digital and printed) that your franchise brokers and sales team can distribute to potential and/or interested franchisees.

Lacking Proof and Feedback


Even if your business is already well-known and has proven itself to be quite lucrative, you have to think like your potential franchisees. Both seasoned businessmen and those who wish to start a business are aware that buying a franchise is a huge decision that requires a lot of money, which is why they want to minimize risk as much as possible by having proof that the franchise they’re buying is indeed a safe and lucrative investment. As such, you should be able to provide all the information they need, such as ROI and forecasted sales, as well as testimonials and feedback from your successful franchisees.

Not Having A Dedicated Franchising Website

If you’re already opening your business to franchising, chances are, you already have a business website with all the products and information about your business. However, a common mistake for franchisors is having their “franchise page” as just a sub-page on their business website. As mentioned earlier, many potential franchisees prefer to get all the information they need to decide whether or not to buy the franchise, instead of having it “sold” to them. As such, one of the best things you can do when selling your business franchise is having an entirely separate website dedicated for franchising which contains all the information they need, and perhaps even an active online customer service representative that they can chat with if they wish to know more about the franchise opportunities you’re selling.

Unclear or Poorly-Written/Made Reading Materials

Providing information to your potential franchisees is something that can’t be stressed enough. However, it’s vital to oversee and ensure that the instructional materials and franchise brochures/portfolios (including those on your website and social media) are concise and easily understood, to minimize any confusion; your potential clients are more likely to hesitate if the information you’re showing isn’t clear or transparent, or is simply poorly worded.

Neglecting Social Media


While you may have invested in print advertisements, active and professional sales agents, and maybe even TV/radio ads, you should give as much focus to online digital marketing, specifically, social media. Many consumers and entrepreneurs search for products, services, and even business and investment opportunities through social media. That said, you should focus on social media marketing for business franchises; this includes content creation and also managing your franchising business’ social media accounts. One should also have a social media account manager to handle all the inquiries on comments and private messages from interested or curious parties.

Conclusion

Opening your business for franchise opportunities is a good sign that it’s grown big enough and has made a name for itself. But in order to boost your chance of selling your franchise, you should definitely watch out for these common franchise marketing mistakes.

The post 5 Marketing Mistakes Franchisors Should Watch Out For appeared first on None Equilibrium.

by Bertram Mortensen at August 19, 2019 06:03 AM

August 18, 2019

ZapperZ - Physics and Physicists

Big Bang Disproved?!
Of course not! But this is still a fun video for you to watch, especially if you are not up to speed on (i) how we know that the universe is expanding and (ii) the current discrepancy in the measurement of the Hubble constant via two different methods.



But unlike politics or social interactions, discrepancies and disagreements in science are actually welcomed, and they are a fundamental aspect of scientific progress. It is how we refine and polish our knowledge into a more accurate form. As Don Lincoln says at the end of the video, scientists love discrepancies. It means that there are more things that we don't know, and more opportunities to learn and discover something new.

Zz.

by ZapperZ (noreply@blogger.com) at August 18, 2019 04:13 PM

August 16, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Keep the Heat Up in Your Oven

Industrial equipment can be sturdy, but it is not unbreakable; it will encounter problems if you don’t maintain it. One of the more important pieces of equipment is your industrial oven. Considering how it blasts incredible amounts of heat into a small space, it is surprising that it doesn’t break down more often. With the following tips, you can ensure that problems with it will be few and far between:

Lubricate the Blower

An essential part of the oven, the blower motor supplies the air the oven needs. Without oxygen there is no fire, so keeping it running is important, and lubrication is critical here. Some models don’t need lubrication but do need regular cleaning; for the rest, lubrication every six months ensures that the blower will keep doing its job. Check the manual for how to lubricate the motor so that you can be sure nothing goes wrong. With regular cleaning and lubrication, you can prevent sudden breakdowns, which can ruin your oven’s performance and productivity.

Maintain the Airflow

If you want your blower motor to work well, nothing must restrict the airflow to the oven. Position the oven so that the blower motor’s air inlets are clear; placing items around the inlets is a bad idea. Clear the space around the oven to ensure maximum air intake.

Use the Right Voltage


Industrial ovens consume great amounts of electricity to function well. If they are not supplied with the right power, they will not perform well. For example, if you hook a 240 VAC machine up to a weaker power supply like 200 VAC, you can expect lower performance. Always plug your machines into the proper power supply for the best results.

Check the Inside

There are several components that you will need to check every few months. Among these are the heating elements. They heat things in the oven, and when they malfunction, you will not get the high temperatures that you need. If they break, replace them. Do the same for other components like the thermal sensor and wiring.

Know When to Replace

Even the sturdiest items break. If your oven is old, then you will need to replace it. Any model over ten years old will not have the innovations of newer models. This is when you should start shopping around. Suppliers like SupaGEEK Designs can offer your company a great new oven that will meet your requirements. You will have to weigh keeping your old oven against buying a new one and enjoying the benefits.

A malfunctioning industrial oven can cause delays down the line. This is bad news for your company’s productivity and bottom line, and it is why the tips above are so important. With their help, you can be sure that breakdowns are kept to a minimum and that your oven performs well every day. The results are better productivity and a bump in profits.

The post Keep the Heat Up in Your Oven appeared first on None Equilibrium.

by Bertram Mortensen at August 16, 2019 02:36 AM

August 15, 2019

Jon Butterworth - Life and Physics

Nature Careers: Working Scientist podcast
I talked to Julie Gould for Nature recently, about the challenges of working on big collaborations, of doing physics in the media spotlight, on why LHC had more impact with the public than LEP, and more. (I also occasionally manage … Continue reading

by Jon Butterworth at August 15, 2019 06:19 PM

John Baez - Azimuth

Carbon Offsets

A friend asks:

A quick question: if somebody wants to donate money to reduce his or her carbon footprint, which org(s) would you recommend that he or she donate to?

Do you have a good answer to this? I don’t want answers that deny the premise. We’re assuming someone wants to donate money to reduce his or her carbon footprint, and choosing an organization based on this. We’re not comparing this against other activities, like cutting personal carbon emissions or voting for politicians who want to cut carbon emissions.

Here’s my best answer so far:

The Gold Standard Foundation is one organization that tackles my friend’s question. See for example:

• Gold Standard, Offset your emissions.

Here they list various ways to offset your carbon emissions, currently with prices between $11 and $18 per tonne.

The Gold Standard Foundation is a non-profit foundation headquartered in Geneva that tries to ensure that carbon credits are real and verifiable and that projects make measurable contributions to sustainable development.

by John Baez at August 15, 2019 05:34 AM

August 14, 2019

ZapperZ - Physics and Physicists

Relativistic Length Contraction Is Not So Simple To See
OK, I actually had fun reading this article, mainly because it opened up a topic that I was only barely aware of. This Physics World article describes the simple issue of length contraction, but then delves into why OBSERVING this effect, such as with our own eyes, is not so simple.

If the Starship Enterprise dipped into the Earth’s atmosphere at a sub-warp speed, would we see it? And if the craft were visible, would it look like the object we’re familiar with from TV, with its saucer section and two nacelles? Well, if the Enterprise were travelling fast enough, then – bright physicists that we are – we’d expect the craft to experience the length contraction dictated by special relativity.

According to this famous principle, a body moving relative to an observer will appear slightly shorter in the direction the body’s travelling in. Specifically, its observed length will have been reduced by the Lorentz factor (1 − v^2/c^2)^{1/2}, where v is the relative velocity of the moving object and c is the speed of light in a vacuum. However, the Enterprise won’t be seen as shorter despite zipping along so fast. In fact, it will appear to be the same length, but rotated.

You might not have heard of this phenomenon before, but it’s often called the “Terrell effect” or “Terrell rotation”. It’s named after James Terrell – a physicist at the Los Alamos National Laboratory in the US, who first came up with the idea in 1957. The apparent rotation of an object moving near the speed of light is, in essence, a consequence of the time it takes light rays to travel from various points on the moving body to an observer’s eyes.
You can read the rest of the explanation and graphics in the article. Again, this is not to say that the "pole-in-barn" exercise that you did in relativity lessons is not valid. It is just that in that case, you were not asked what you actually SEE with your eyes when that pole is passing through the barn, and your pole is long and thin, as opposed to an object of substantial size and width. The notion that such an object will be seen with our eyes flat as a pancake arguably may not be true here.
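
Just to attach numbers to the quoted formula (a sketch of my own, not from the article), here is the measured contraction; remember that, per Terrell, a photograph of the object would show a rotation rather than this flattening:

import math

C = 299_792_458.0  # speed of light in m/s

def contracted_length(proper_length, v):
    """Length measured (not photographed!) by an observer the object moves past."""
    return proper_length * math.sqrt(1.0 - (v / C) ** 2)

# A 100 m starship at various fractions of the speed of light:
for beta in (0.1, 0.5, 0.9, 0.99):
    print(f"beta = {beta:4.2f}: measured length = {contracted_length(100.0, beta * C):6.2f} m")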

Zz.

by ZapperZ (noreply@blogger.com) at August 14, 2019 04:17 PM

Lubos Motl - string vacua and pheno

Coincidences, naturalness, and Epstein's death
The circumstances of Jeffrey Epstein's death seem to be a drastic but wonderful metaphor for naturalness in physics: those who say "there is nothing to see here" in the irregularities plaguing Epstein's jail seem to be similar to those who say "there is nothing to see here" when it comes to fine-tuning or unlikely choices of parameters in particle physics.

As far as I can say, a rational person who thinks about these Epstein events proceeds as follows:
  • an invention of rough hypotheses or classes of hypotheses
  • usage of known or almost known facts to adjust the probabilities of each hypothesis
It's called logical or Bayesian inference! That's a pretty much rigorous approach justified by basic probability calculus – which is just a continuous generalization of mathematical logic. The opponents of this method seem to prefer a different Al Gore rhythm:
  • choose the winning explanation at the very beginning, according to some very simple e.g. ideological criteria or according to your own interests; typically, the winning explanation is the most politically correct one
  • rationalize the choice by saying that all other possible explanations are hoaxes, conspiracy theories, "not even wrong" theories that are simultaneously unfalsifiable and already falsified, and by screaming at, accusing, and insulting those who argue that their other choices seem more likely – often those who do some really fine research
Which of the approaches is more promising as a path towards the truth? Which is the more honest one? These are rhetorical questions – of course Bayesian inference is the promising and ethical approach while the other one is a sign of stupidity or dishonesty. I am just listing the "second approach" to emphasize that some people are just dumb or dishonest – while they or others often fail to appreciate this stupidity or dishonesty.
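
As a toy illustration of that update rule (my own sketch; all the numbers are made-up placeholders, not claims about any real case), evidence that is unlikely under the initially favored hypothesis shifts belief toward the alternatives:

# Bayes' rule: P(H|E) is proportional to P(E|H) * P(H).
priors = {"H1": 0.90, "H2": 0.06, "H3": 0.04}        # initial degrees of belief
likelihoods = {"H1": 0.02, "H2": 0.30, "H3": 0.40}   # P(observed evidence | H)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}
print(posteriors)  # H1 drops from 0.90 to about 0.35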



OK, the basic possible explanations of the reported death seem to be the following:
  1. Epstein committed suicide and all the "awkward coincidences" are really just coincidences that don't mean anything
  2. Epstein committed suicide and someone helped to enable this act, perhaps because of compassion
  3. Epstein was killed by somebody and it's accidentally hard to determine who was the killer because the cameras etc. failed to do their job
  4. Epstein was killed by somebody who took care of details and most of these coincidences are issues that the killer had to take care of
  5. Epstein is alive – he was probably transferred somewhere and will be allowed a plastic surgery and new identity
I have ordered the stories in a certain way – perhaps from the most "politically correct" to the most "conspiracy theory-like" explanations. I had to order them in some way. Also, some completely different explanation could be missing from my list – but at some level, it should be possible to group the explanations into boxes according to Yes/No answers to well-defined questions, which means that there is a semi-reliable way to make sure that you won't miss any option.



OK, I think that there are lots of politically correct, basically brainwashed and brain-dead, people who imagine a similar list, order it similarly, and pick the first choice – the most politically correct choice – because it's what makes them feel good, obedient, and it's right according to them. They may have been trained to think that it's ethical if not morally superior to believe the first explanation according to a similar ordering.

And then there is the rest of us, the rational people who realize that the most politically correct explanation is often false and one should treat the explanations fairly and impartially, regardless of whether they sound politically correct or convenient for certain people etc.

In the absence of special data and circumstances, the rational people among us also favor the "least conspirational" explanation – well, the most likely one. However, it isn't necessarily the "most politically correct" one in general. Also, the fact that we try to favor the "most likely" explanation is a tautology – it's the task we are solving from the beginning.

But in this case, and many others, there are lots of special facts that seem to matter and affect the probabilities. In this case, and quite generally, they just make the "conspiracy-like explanations" more likely. (A much more detailed analysis should be written to clarify which hypotheses are strengthened by which special circumstances.) In this Epstein story, they are e.g. the following:
  1. Epstein was on suicide watch just three weeks ago but he was taken from the suicide watch days before he was found dead
  2. Epstein has previously claimed that someone tried to kill him in jail
  3. the cameras that could watch him were looking the other way for a very long time – a fact that may clearly be counted as a case of malfunctioning camera (and Polymath is just batšit crazy when he claims that a camera looking the other way, away from Epstein, for hours (?) is not malfunctioning)
  4. Epstein's cellmate was transferred hours before Epstein's death (a possible witness)
  5. the cellmate was taken out from a cell that has a bunk bed (double decker) which is probably needed for a suicide claim (but the very presence of a bunk bed increases the probability of the suicide option 1, too)
  6. he should have been checked every 30 minutes but around the death, the protocol was violated for hours
  7. one of the two relevant guards wasn't a corrections officer but a more unrelated employee
  8. he was claimed to have hanged himself using bed sheets, but the sheets should have been made of paper and the bed frame was unmovable while the room was 8-9+ feet high
  9. a new huge batch of documents about the ring was released by court a day before his death
  10. the number of people who had the motive to kill Epstein was huge – and their combined power is even greater because they were usually rich and high-profile people (note that I don't make any claim about whether the potential killer was left-wing or right-wing – people in both camps speculate but the left-wing killers seem more likely because they were more connected with Epstein and apparently more sinful)
And I am pretty much certain that this list is incomplete, even when it comes to coincidences that have really shocked me. I tried to add some hyperlinks (sources) to the list above but there's no objective way to determine what is the "best" source. Most of these things simply look credible. Some of them really look implicitly "proven". If there were a good camera recording of his suicide, we would have probably learned about it, right?

So I think it's just OK to list similar coincidences even without other "sources". In my case, they are a result of my research and careful curation of sources. I am proud of offering occasional investigative stories that are both more accurate and more early than elsewhere. So if someone suggests that I should be just a follower who copies some MSNBC articles, I feel incredibly insulted because TRF is obviously better, more accurate, and more groundbreaking than the MSNBC. If you really disagree with such a claim, then it would be sensible for you to avoid my website altogether, wouldn't it?

At any rate, there is a very large number of "coincidences" that are generally increasing the probability of the more "conspiracy-like" explanations. Everyone who doesn't acknowledge this fact is a brainwashed or brain-dead irrational moron, a stupid sheep that might be used for wool but not for thinking. The event may still turn out to be a suicide and the coincidences may be just coincidences. But even if that is the case, it will still be true that the people who accept this conclusion immediately are either stupid or dishonest – or perhaps even involved in the plan.

A broken clock is correct twice a day. A wrong reasoning may sometimes end up with a conclusion that happens to be right, too. But even when it is so, we can still analyze how the reasoning was made and if it is demonstrably fallacious or stupid, it can be demonstrated that it is fallacious or stupid – and that the person reasoning in this way is analogous to the broken clock.

Now the analogy. You have the people who won't ever acknowledge any arguments involving fine-tuning or naturalness or the preference for theories that just look more solid, less contrived etc. Like in the Epstein case, these people find their winning explanation in advance, i.e. by ignoring all the relevant detailed evidence that may be collected later. And then they just rationalize this explanation and spit on the alternatives and everyone who "dares" to defend them.

So these people may decide that the best theory is a "quantum field theory with the smallest number of component fields" – their form of Occam's razor. Supergravity or string theory "add fields", according to their counting, so they are less compatible with this version of Occam's razor, and therefore they eliminate these theories even though they don't have any negative evidence.

But competent physicists don't think like that. The claim that a "field theory with the smallest number of fields is most likely" is just a hypothesis and there is an extremely strong body of evidence – both anecdotal empirical evidence and theoretical evidence in the form of incomplete but nearly mathematical proofs – that this assumption is incorrect. Competent physicists really know that the relevant realization of Occam's razor is different and when some multiplets (or supermultiplets) of fields are guaranteed to exist by a symmetry principle or another qualitative principle, they cannot be counted as a disadvantage of the theory that makes them unlikely, despite the fact that the number of component fields may grow very high.

So once again, competent physicists are actually doing something that is analogous to the rational people who care about the peculiarities involving Epstein's guards, documents, camera, and cell maintenance. They just work with the evidence in a nontrivial way – with lots of evidence. The rational usage changes the odds of various theories and even classes of theories. In particular, people have learned that theories with greater numbers of component fields implied by powerful enough symmetry principles (or similar principles) seem like the more natural, default, apparently more likely hypothesis than the naive theory with the smallest number of component fields.

Both in the case of particle physics and Epstein's death, there simply exist two groups of people. One prefers an impartial treatment of the hypotheses and relentless, rigorous work with the detailed evidence and its ramifications; the other just prefers naively simple explanations picked by some stupid – and, in generality, clearly incorrect – criteria, followed by repetitive rationalization and frantic but content-free attacks against everyone who disagrees with them.

by Luboš Motl (noreply@blogger.com) at August 14, 2019 03:55 AM

August 13, 2019

Marco Frasca - The Gauge Connection

Where we are now?

Summer conferences have passed by, we have more precise data on the Higgs particle, and some new results were announced. So far, this particle appears more and more in agreement with the Standard Model expectations, with no surprise in view. Several measurements were performed with the full dataset at 140 {\rm fb}^{-1}. Most commentators avoid mentioning this because it does not warrant click-bait anymore. At EPS-HEP 2019 in Ghent (Belgium), the following slide was presented by Hulin Wang on behalf of the ATLAS Collaboration

ZZ decay and higher resonances

There appears to be an excess at 250 GeV and another at 700 GeV, but we are talking about 2 sigma, nothing relevant. Besides, ATLAS keeps on seeing an excess in vector boson fusion for the ZZ decay, again about 2 sigma, but CMS sees nothing; rather, they are somewhat on the missing side!

No evidence of supersymmetry whatsoever; neither the Higgs multiplets nor a charged Higgs, which could hint at supersymmetry, have been seen. I would like to recall that some researchers were able to obtain the minimal supersymmetric standard model from string theory, so this is a decisive aspect of the experimental search. Is the Higgs particle just the first one of an extended sector of electroweak (soft) supersymmetry breaking?

So, why could the slide I just posted be so important? The interesting fact is the factor 2 between the mass of this presumed new resonance and that of the Higgs particle. The Higgs sector of the Standard Model can be removed from it and treated independently. Then, one can solve it exactly, and the spectrum is given by integer multiples of the mass of the Higgs particle. This is exactly the spectrum of a Kaluza-Klein particle, and it would represent an indirect proof of the existence of another dimension in space. So, if confirmed, we would move from a desolating scenario with no new (beyond Standard Model) physics in view to a completely overturned situation! We could send all the critics back to sleep, wishing them better luck with their next attempt.
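
Making the arithmetic explicit (my paraphrase of the relation just stated, not a derivation), the claimed tower reads

m_n = n \, m_H, \qquad n = 1, 2, 3, \ldots

so with m_H \approx 125 {\rm GeV} the n = 2 level sits near 250 {\rm GeV}, which is why the factor of 2 above is suggestive.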

Back to reality: the slide gives the result for the dataset of 36.1 {\rm fb}^{-1}, and no confirmation from CMS has ever arrived. We can only hope that this dream scenario comes to life.

by mfrasca at August 13, 2019 05:59 PM

August 11, 2019

Lubos Motl - string vacua and pheno

Four Tommaso Dorigo's SUGRA blunders
Almost all the media informed about the new Special Breakthrough Prize in Fundamental Physics (which will be given to the guys during a TV broadcast event on November 3rd, in NASA's Hangar One, Mountain View, CA) – a prize to three founders of supergravity – as if it were any other prize.

The winners are lucky to divide the $3 million and/or they deserve the award which was chosen by a nontrivial process, like in the case of the Nobel Prize or any other prize. Thankfully, in this case, most journalists didn't try to pretend that they know more about supergravity than the committee. The judgements or information about the importance of work in theoretical physics should be left to the experts because these are damn hard things that an average person – and even an average PhD – simply hasn't mastered.

I detected three amazing exceptions. Nature, Prospect Magazine, and Physics World wrote something completely different. The relevant pages of these media have been hijacked by vitriolic, one-dimensional, repetitive, scientifically clueless, deceitful, and self-serving anti-science activists and they tried to sling as much mud on theoretical physics as possible – which seems to be the primary job description of many of these writers and the society seems to enthusiastically fund this harmful parasitism.



It could be surprising, especially in the case of Nature and Physics World, because under normal circumstances, you would expect Nature and Physics World to be more expert-oriented and closer to the "scientific establishment". But the evolution of the media has produced the opposite outcome. The media that should be close to the scientific establishment are actually almost completely controlled by the self-anointed Messiahs – another branch of all those SJWs who want to destroy the civilized world as we have known it for centuries.

It's ironic but if you look at the reasons, it's logical. It has analogous reasons as the fact that the "inner cities" typically become the ghettos or homes to poor demographic groups – while the productive parts of the society typically have to move to more generic and less "central" suburbs. Similarly, the richest Western European countries are those that seem to be more likely to lose their civilized status very soon. What is the reason? Well, the most special and prosperous places – the inner cities or the rich Western countries – are those that also maximally attract the people who are destined to ruin them.

That's why the "most pro-science journals", inner cities, and wealthiest Western countries putrefy well before others.



Sadly, experimental particle physicist and blogger Tommaso Dorigo has partly joined these anti-civilization warriors. He wrote
My Take On The Breakthrough Prizes
where he repeats several deep misconceptions of the scientifically illiterate public. First, he recommended to the three winners a particular way to spend the money. But Tommaso is no longer capable of even doing jokes properly, so let me fix his failed attempt. He advised
  • Ferrara to buy a new Ferrari
  • van Nieuwenhuizen to buy a newer housing in Malibu
  • and a new van for Dan Freedman for his bikes, so that he may become a truly freed man
OK, Dorigo failed at humor as well – now for the more serious things. Dorigo says that it's good news that a rich guy named Milner has randomly decided to pay money for a failed theoretical idea named supergravity. Such a statement is wrong at every discernible level.

First, Dorigo completely misunderstood who picks the winners.

Future winners of the Breakthrough Prize in Fundamental Physics must first be nominated. I know everything about the process of nomination – because I am a nominator. But more importantly, Dorigo failed to read even the most elementary press release. If he had read it, he would know that
A Special Breakthrough Prize in Fundamental Physics can be awarded by the Selection Committee at any time, and in addition to the regular Breakthrough Prize awarded through the ordinary annual nomination process. Unlike the annual Breakthrough Prize in Fundamental Physics, the Special Prize is not limited to recent discoveries.
The quote above says that it is the Selection Committee that decides to grant this special prize – and it can do so at any moment. Is the committee composed of Milner? Or Milner and Zuckerberg? Not at all. Just do a simple Google search and you will find the composition of the Selection Committee. You will find out that the committee consists of the winners of the full-sized Breakthrough Prize in Fundamental Physics – the page contains names of 28 men alphabetically sorted from Arkani-Hamed to Witten (the list of men is surely open to hypothetical women as well). There is no Milner or Zuckerberg on the committee.

(After the SUGRA update, the list will include 4 former co-authors of mine. So I should also win the prize by default, without the needless bureaucracy.)

So you can see, the collection of winners so far does exactly the same thing during their meetings as the members of the Arista that Feynman was once admitted to: they choose who else is worthy of joining the wonderful club! ;-) Feynman didn't like it – because he didn't like any honors or the related pride about status – but if you look at it rationally, you will agree that it's the "least bad" way of choosing new winners.

I find it puzzling that despite Dorigo's (and similar people's) obsession with the money, awards, and all the sociological garbage, he was incapable of figuring out whether the new winners are picked by Milner or by top physicists. It's the latter, Tommaso. You got another failing grade.

The main failing grade is given for the ludicrous comments about the "failed supergravity", however.

Well, to be sure that his dumb readers won't miss it, he wrote that supergravity was a "failed theory" not once but thrice:
I'll admit, I wanted to rather title this post "Billionaire Awards Prizes To Failed Theories", just for the sake of being flippant. [...]

It is a sad story that SUGRA never got a confirmation by experiment to this day, so that it remains a brilliant, failed idea. [...]

(SUGRA is, to this day, only a beautiful, failed theory)
Sorry, Tommaso, but just like numerous generic crackpots who tightly fill assorted cesspools on the Internet, you completely misunderstand how the scientific method works. A theory cannot become "failed" for its not having received an experimental proof yet.

On the contrary, the decisions about the validity of scientific theories are all about the falsification. For a scientific theory or hypothesis to become failed, one has to falsify it – i.e. prove that it is wrong. The absence of a proof in one way or another isn't enough to settle the status of a theory.

Instead, a theory or hypothesis must be in principle falsifiable – which SUGRA is – and once it's discovered, defined, or formulated, it becomes provisionally viable or temporarily valid up to the moment when it's falsified. And that's exactly the current status of SUGRA: it is provisionally viable or temporarily valid.

A physicist must decide whether the Einsteinian general relativity with or without the local supersymmetry – GR or SUGRA – seems like the more likely long-distance limit of the effective field theories describing Nature (in both cases, GR or SUGRA must be coupled to extra matter). But the actual experts who study these matters simply find SUGRA to be more likely for advanced reasons (realistic string vacua seem to need SUSY, naturalness, and others) – so SUGRA is the default expectation that will be considered provisionally valid up to the moment when it's ruled out.

In a typical case of falsification, an old theory is ruled out simultaneously with some positive evidence supporting an alternative, usually newer, theory.

But even if you adopted some perspective or counting in which SUGRA is not the default expectation about the relevant gravitational local symmetries in Nature, supergravity is still found in 176,000 papers according to Google Scholar. It's clearly a theory that has greatly influenced physics, according to the physicists. Of course, sane science prizes should exhibit some positive correlation with the expert literature. A layman may claim to know better than the theoretical physicists, but that is unwise.

Everyone who writes that SUGRA is a "failed idea" is just a scientifically illiterate populist writer who clearly has nothing to do with good science of the 21st century – and whose behavior is partly driven by the certainty that he or she could never be considered as a possible winner of an award that isn't completely rigged. Sadly, Tommaso Dorigo belongs to this set. He may misunderstand why good physicists consider SUGRA to be the "default expectation" – that would be mere ignorance, the innocent reason why Dorigo has no chance of making it onto the list from Arkani-Hamed to Witten.

However, he is a pompous fool because he also brags about this ignorance. He boasts about how wonderfully perfumed the cesspool he belongs to is.

Egalitarianism

If you're not following the failing grades, Dorigo has gotten three of them so far: for the inability to convey good jokes when he tries; for misunderstanding how the new winners are picked; and for misunderstanding what it takes for a theory to become "failed" in science. He deserves the fourth failing grade for the comments at the end of his text. He tried to emulate my "memos" but his actual memo – in an article about supergravity! – is that the inequality in the world is the principal cancer that must be cured.

Holy cow. First of all, such totally ideological comments are out of place in an article pretending to be about supergravity – but if he deserved a passing grade, he would have written that the real cancer is egalitarianism, Marxism, and especially its currently active mutation, neo-Marxism. This is the disease of mankind that all decent people are trying to cure right now!

by Luboš Motl (noreply@blogger.com) at August 11, 2019 12:34 PM

Jon Butterworth - Life and Physics

Space Shed at Latitude
I did an interview with Jon Spooner, Director of Human Space Flight at the Unlimited Space Agency at Latitude 2018. It is now available as a podcast, which you can listen to here (Series 1, Episode 3). It is intended to … Continue reading

by Jon Butterworth at August 11, 2019 07:25 AM

August 08, 2019

ZapperZ - Physics and Physicists

RIP J. Robert Schrieffer
I'm sad to hear of the passing of a giant in our field, and certainly in the field of Condensed Matter Physics. Nobel Laureate J. Robert Schrieffer has passed away at the age of 88. He is the "S" in the BCS theory of superconductivity, one of the most monumental theories of the last century, and one of the most cited. So "complete" was the theory that, by early 1986, many people thought that the field of superconductivity had been fully "solved" and that nothing new could come out of it. Of course, that changed completely later that year with the discovery of high-temperature superconductivity.

Unfortunately, I wasn't aware of Schrieffer's predicament during the last years of his life. I certainly was not aware that he was incarcerated for a while.

Late in life, Dr. Schrieffer’s love of fast cars ended in tragedy. In September 2004, he was driving from San Francisco to Santa Barbara, Calif., when his car, traveling at more than 100 miles per hour, slammed into a van, killing a man and injuring seven other people.

Dr. Schrieffer, whose Florida driver’s license was suspended, pleaded no contest to felony vehicular manslaughter and apologized to the victims and their families. He was sentenced to two years in prison and released after serving one year.

Florida State placed Dr. Schrieffer on leave after the incident, and he retired in 2006.

I've met him only once while I was a graduate student, and he was already at Florida State/NHML at that time. His book and Michael Tinkham's were the two that I used when I decided to go into superconductivity.

Leon Cooper is the only surviving member of the BCS trio.

Zz.

by ZapperZ (noreply@blogger.com) at August 08, 2019 07:59 PM

August 07, 2019

Lubos Motl - string vacua and pheno

Andy Strominger becomes lead cheerleader at Greene's festival
Carlo Rubbia demands that particle physicists be courageous and build the damn muon collider, a compact Higgs factory.
A few days ago, the World Science Festival of Brian Greene posted a 90-minute video with interviews about the state of fundamental physics:



Bill Zajc sent it to me and I finally had the time and energy to watch it, at a doubled speed. At the beginning, four minutes of Greene and visual tricks – similar to those from his PBS TV shows – are shown. I actually think that some of the tricks are new and even cooler than they used to be. I really liked the segment where Greene was grabbing and magnifying the molecules and taking the atoms, nuclei, and strings out of them. The illustrations of the microscopic building blocks had to be created to match the motion of Greene's hands.



Greene had three guests whom he interviewed separately, in 30-minute segments: Marcelo Gleiser who spoke like a historian and philosopher; Michael Dine who covered the search for supersymmetry etc.; and Andrew Strominger who amusingly discussed the exciting quantum gravity twists of the reductionist stringy program.



I know Dine and Strominger very well – not only as co-authors of papers. I have never met Gleiser. Some Brazilian commenters mention he is unknown in Brazil but he should be famous, partly because he is a DILF – I suppose it meant a Daddy [He or She] would Like to Fudge. Gleiser and Greene discuss the history of unification and a model of expanding knowledge (an island) which also expands the ignorance (the boundary between the known island and the unknown sea). I thought that David Gross insisted that it was his metaphor but OK.

Michael Dine has been a phenomenologist but he bragged that he was one of the few who were cautiously saying that SUSY may remain undiscovered after a few years ;-). I don't really think that this skepticism was rare (my 2007 predictions said that the probability for the LHC SUSY discovery was 50% and I think it agreed with what people were saying).

The real issue is that when you're skeptical about discoveries around the corner, you don't write papers about it – because you really don't think that you have anything interesting to write about. That's why the literature on this topic is naturally dominated by the people who did expect speedy discoveries of SUSY. But the actual percentage of HEP people who expected speedy discoveries of BSM physics wasn't much higher than 1/2, and maybe it was lower than 1/2. The skeptics avoided writing useless and unmotivated papers, and I think it's the right thing to do. One must just be careful not to misinterpret the composition of the literature as a quantification of some collective belief – and if that invalid interpretation is made, the fault lies with the interpreter.

OK, so Greene said that 90% of Dine's work may be wrong or useless and Dine roughly agreed.

Andy Strominger (whose 94-year-old father just got a new 5-year early career grant) pumped quite a different level of excitement into the festival. The research turned out to be much more exciting than what people expected. People expected some reductionist walk towards shorter distances and higher energies, they would finally find the final stringy constituents, strings, and that would be the end of it. Physics departments would shut their doors and Brian Greene would pour champagne on his festivals, or something like that, Andy said. ;-)

What happened was something else. People found the black holes, their puzzling quantum gravity behavior, holography, connections with superconductors and lots of other things. This is better than just "some final particles inside quarks". I agree with that. Strominger's metaphor involves Columbus. You can see that Andy is not quite the extreme progressive left – because those wouldn't dare to speak about the imperialist villain Columbus without trashing him.

OK, Columbus promised to get to China. Instead, he discovered America. Some of his sponsors were disappointed that he only discovered some America and not China. These days, people may be happy because America is better than China. Andy forgot to describe the future evaluation. Maybe, in a century or so from now, people will be disappointed again that Columbus found just America and not China because China may be better. But it could be good news for Andy (he's at least impartial) – who has spent quite some time in China and who actually speaks Chinese.

While Dine predicted that the field would shrink, Strominger's description implicitly says that the field is growing. I agree with Andy – but a part of the difference is that Andy and Michael don't have quite the same field. Dine is a phenomenologist and the contact with the experiment is almost a defining condition of his field or subfield. Strominger is a formal theorist so he can do great advances independently of experimental tests.

It would have been a surprise for people in 1985 to see how string theory research has evolved. In 1985, Witten expected that in a few weeks, the right compactification would be found and that would be the end of physics. Instead, all the complex branches of knowledge were found. With hindsight, it's almost obvious that what has happened had to happen. The one-direction reductionist approach works well but only up to the Planck scale.

There are no sub-Planckian, physically distinguishable distances. So it should have been clear, even in advance, that the one-direction journey had to become impossible or multi-directional once the true phenomena near the fundamental scale are probed. And that's what happened. Most of the stringy quantum gravity research makes it very clear that it's not a part of quantum field theory that respects the one-directional quantum field theory renormalization flows. All phenomena are "equally fundamental" at the Planck scale, there is no clear explanatory direction over there anymore. People unavoidably study effects at distances that no longer shrink (you shouldn't shrink beneath the Planck length) and they study increasingly complex behavior (starting with the entanglement) of the building blocks that are roughly Planckian in size.
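For reference, the scale invoked here is the usual Planck length and energy (standard textbook definitions, nothing specific to this discussion):

\[
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,{\rm m}, \qquad E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\,{\rm GeV},
\]

which is why the one-directional journey to ever shorter distances has to end: there are no physically distinguishable distances below \(\ell_P\).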

A viewer who has observed the exchange carefully could have seen some additional, funny subtle differences between Greene and Strominger. Greene wanted to introduce black hole thermodynamics with stories worshiping John Wheeler. It was all about Wheeler and... about his student Jacob Bekenstein. Meanwhile, Andy Strominger responded by completely ignoring this thread and talking about his late friend Hawking only. Strominger told us that Hawking had a tomb with the Bekenstein-Hawking entropy on it.



Well, Andy has conflated Boltzmann's and Hawking's tombs. Boltzmann has \(S=k\cdot \log W\) over there, but Hawking has the temperature formula

\[
T = \frac{\hbar c^3}{8\pi G M k}
\]

on his grave, along with "here lies what was mortal of Stephen Hawking, 1942-2018", indicating that most of Stephen Hawking is immortal and eternal. I just wanted to return some rigor to the discussion of graves here, Andy.
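As a numerical aside of mine (not part of the festival discussion), one can plug a solar mass into the grave's formula to see just how cold such a black hole is:

    import math

    # Hawking temperature T = hbar c^3 / (8 pi G M k), in SI units
    hbar = 1.0546e-34   # J s
    c    = 2.9979e8     # m/s
    G    = 6.674e-11    # m^3 kg^-1 s^-2
    k    = 1.381e-23    # J/K
    M    = 1.989e30     # kg, one solar mass

    T = hbar * c**3 / (8 * math.pi * G * M * k)
    print(f"T = {T:.2e} K")   # about 6e-8 K

A stellar-mass black hole is therefore far colder than the 2.7 K microwave background, so it absorbs more radiation than it emits.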

All the guests turned out to be amusing narrators but it's pretty funny that Andy Strominger ended up as the most bullish and enthusiastic guest. Why? Fifteen years ago, Andy was invited to record some monologues for the PBS NOVA TV shows of Brian Greene, much like Cumrun Vafa and others. Vafa was totally excited about the TV tricks. They made his head explode! He watched it with his two sons, and when Vafa's and Bugs Bunny's heads exploded, all three boys were happy.

(Vafa has 2 sons, Strominger has 4 daughters. If this perfect correlation were the rule in Israel and Iran, you could be optimistic about the fudging happy end of the conflict of the two countries LOL.)

On the other hand, those 15 years ago, Andy Strominger was less enthusiastic because all his clips were removed from the show. Maverick Strominger's comments were insufficiently enthusiastic for Greene's simple, pre-determined, bullish tone. So now, when maverick Strominger is the cheerleader-in-chief, you may be pretty sure that most people speak rather negatively and pump lots of disillusion into the discourse.

by Luboš Motl (noreply@blogger.com) at August 07, 2019 06:47 PM

Axel Maas - Looking Inside the Standard Model

Making connections
Over time, it has happened that a solution in one area of physics could also be used in a quite different area, or at least inspired one. Unfortunately, this does not always work. Quite often it turned out, once the finer points were reached, that something promising did not work in the end. Thus, it pays to always be careful with such a transfer, and never to believe the hype. Still, in some cases it worked, and even led to brilliant triumphs. And so it is always worthwhile to try.

Such an attempt is precisely the content of my latest paper. In it, I try to transfer ideas from my research on electroweak physics and the Brout-Englert-Higgs effect to quantum gravity. Quantum gravity is first and foremost still an unsolved issue. We know that mathematical consistency demands some unification of quantum physics and gravity, and we expect that this will be achieved by a quantum theory of gravity, though we still lack any experimental evidence for this assumption. Still, I make the assumption for now that quantum gravity exists.

Based on this assumption, I take a candidate for such a quantum gravity theory and ask what its observable consequences are. This question has driven me for a long time in particle physics, and I think that by now I have an understanding of how it works. But last year I was challenged on whether these ideas can still be right if gravity is in the game. This new paper (https://arxiv.org/abs/1908.02140) is essentially my first step towards an answer. Much of this answer is still rough, and especially the mathematics will require much work. But at least it provides a first consistent picture. And, as advertised above, it draws from a different field.

The starting point is that the simplest version of quantum gravity currently considered is actually not that different from other theories in particle physics. It is a so-called gauge theory. As such, many of its fundamental objects, like the structure of space and time, are not really observable. Just like most of the elementary particles of the standard model, which is also a gauge theory, are not. Thus, we cannot see them directly in an experiment. In the standard model case, it was possible to construct observable particles by combining the elementary ones. In a sense, the particles we observe are bound states of the elementary particles. However, in electroweak physics one of the elementary particles in the bound state totally dominates the rest, and so the whole object looks very similar to the elementary one, but not quite.

This works because the Brout-Englert-Higgs effect makes it possible. The reason is that there is a dominating kind of unobservable structure, the so-called Higgs condensate, which creates this effect. This is somewhat coincidental: if the parameters of the standard model were different, it would not work. But, luckily, our standard model has just the right parameter values.
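Schematically (my shorthand for the mechanism sketched above, not a formula from the paper): writing the Higgs field as the dominating condensate plus a fluctuation, \(\phi = v + \eta\), a gauge-invariant bound-state correlator reduces at leading order to the correlator of the elementary particle,

\[
\langle (\phi^\dagger \phi)(x)\, (\phi^\dagger \phi)(y) \rangle \approx {\rm const} + v^2\, \langle \eta(x)\, \eta(y) \rangle + \ldots
\]

which is why the observable bound state looks very similar to the elementary particle, but not quite.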

Now, when looking at gravity around us, there is a very similar feature. While we have the powerful theory of general relativity, which describes how matter warps space, we rarely see this. Most of our universe behaves much more simply, because there is so little matter in it, and because the parameters of gravity are such that this warping is very, very small. Thus, we again have a dominating structure: a vacuum which is almost unwarped.

Using this analogy and the properties of gauge theories, I figured out the following: we can use something like the Brout-Englert-Higgs effect in quantum gravity. All observable particles must still be some kind of bound states, but they may now also include gravitons, the elementary particles of quantum gravity. Just like in the standard model, these bound states are dominated by just one of their components, and if there is a standard model component, it is the dominating one. Hence, the particles we see at the LHC will essentially look as if there were no gravity. And this is very consistent with experiment. Detecting the deviations will be so hard, compared to the effects coming from the standard model alone, that we can pretty much forget about it for earthbound experiments. At least for the next couple of decades.

However, there are now also some combinations of gravitons without standard model particles involved. Such objects have long been speculated about, and are called geons, or gravity balls. In contrast to the standard model case, however, they are not classically stable, but they may be stabilized by quantum effects. The bound state structure strongly suggests that there is at least one stable one. Still, this is pure speculation at the moment. But if they are stable, these objects could have dramatic consequences. E.g., they could be part of the dark matter we are searching for. Or they could make up black holes, very much like neutrons make up a neutron star. I have no idea whether any of these speculations could be true. But if there is only a tiny amount of truth in them, this could be spectacular.

Thus, some master students and I will set out to have a look at these ideas. To this end, we will need to do some hard calculations. And, eventually, the results should be tested against observation. These will be coming from the universe, that is, from astronomy – especially from the astronomy of black holes, where recently there have been many interesting and exciting developments, like observing two black holes merge, or the first direct image of a black hole (obviously just black inside a kind of halo). These are exciting times, and I am looking forward to seeing whether any of these ideas work out. Stay tuned!

by Axel Maas (noreply@blogger.com) at August 07, 2019 08:37 AM

August 06, 2019

ZapperZ - Physics and Physicists

Light Drags Electrons Backward?
As someone who was trained in condensed matter physics, and someone who also worked in photoemission, light detectors, and photoelectron sources, research on light interaction with solids, and especially with metallic surfaces, is something I tend to follow rather closely.

I've been reading this article for the past few days, and it gets more fascinating each time. This is a report on a very puzzling photon drag effect in metals, or in this case, in gold, which is the definitive Drude metal if there ever was one. What is puzzling is not the photon drag on the conduction electrons itself. What is puzzling is that the direction of the photon drag appears to be completely reversed between the effect seen in vacuum and that seen in ambient air.

A review of the paper can be found here. If you don't have access to PRL, the arXiv version of the paper can be found here. So it appears that, when the experiment is done in vacuum, light pushes the conduction electrons backward, while when it is done in air, it pushes them forward, as expected.

As they varied the angle, the team measured a voltage that largely agreed with theoretical expectations based on the simple light-pushing-electrons picture. However, the voltage they measured was the opposite of that expected, implying that the current flow was in the wrong direction. “It’s a weird effect,” says Strait. “It’s as if the electrons are somehow managing to flow backward when hit by the light.”
Certainly, surface effects may be at play here. And those of us who have done photoemission spectroscopy can tell you all about surface reconstruction, even in vacuum, when a freshly-cleaved surface literally changes characteristics right in front of your eyes as you continually perform a measurement on it. So I am not surprised by the differences detected between vacuum and in-air measurement.

But what is very puzzling is the dramatic difference here, and why light appears to push the conduction electrons one way in air, and in the opposite direction in vacuum. I fully expect more experiments on this, and certainly more theoretical models to explain this puzzling observation.
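To see what the naive expectation even is, here is the simple momentum-transfer picture in a few lines (my sketch, with an assumed photon energy; this is not the authors' model):

    import math

    # Each absorbed photon of energy E carries momentum E/c; at incidence
    # angle theta, the in-plane component (E/c)*sin(theta) is handed to the
    # electron gas, so the naive drag current flows along the beam's
    # in-plane projection and grows with sin(theta).
    E_photon = 1.5 * 1.602e-19   # J (assumed ~1.5 eV photon)
    c = 2.9979e8                 # m/s

    for theta_deg in (0, 20, 40, 60, 80):
        p_inplane = (E_photon / c) * math.sin(math.radians(theta_deg))
        print(f"theta = {theta_deg:2d} deg -> {p_inplane:.2e} kg m/s per photon")

The vacuum measurements correspond to a current opposite to this direction, as if the sign of the momentum transfer were reversed.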

This is just one more example where, as we apply our knowledge at the edge of what we know, we start finding new mysteries to solve or to explain. Light interaction with matter is one of the most common and best-understood phenomena. Light interaction with metals is the basis of the photoelectric effect. Yet, as we push the boundaries of our knowledge and start to look at very minute details, due to applications in, say, photonics, we also start to see new things that we do not expect.

It is why I always laugh whenever someone thinks that there is an "end of physics". Even with things that we think we know, or things that are very common, if we start to make better and more sensitive measurements, I don't doubt that we will start finding something else that we have not anticipated.

Zz.

by ZapperZ (noreply@blogger.com) at August 06, 2019 02:15 PM

Matt Strassler - Of Particular Significance

A Catastrophic Weekend for Theoretical High Energy Physics

It is beyond belief that not only am I again writing a post about the premature death of a colleague whom I have known for decades, but that I am doing it about two of them.

Over the past weekend, two of the world’s most influential and brilliant theoretical high-energy physicists — Steve Gubser of Princeton University and Ann Nelson of the University of Washington — fell to their deaths in separate mountain accidents, one in the Alps and one in the Cascades.

Theoretical high energy physics is a small community, and within the United States itself the community is tiny.  Ann and Steve were both justifiably famous and highly respected as exceptionally bright lights in their areas of research. Even for those who had not met them personally, this is a stunning and irreplaceable loss of talent and of knowledge.

But most of us did know them personally.  For me, and for others with a personal connection to them, the news is devastating and tragic. I encountered Steve when he was a student and I was a postdoc in the Princeton area, and later helped bring him into a social group where he met his future wife (a great scientist in her own right, and a friend of mine going back decades).  As for Ann, she was one of my teachers at Stanford in graduate school, then my senior colleague on four long scientific papers, and then my colleague (along with her husband David B. Kaplan) for five years at the University of Washington, where she had the office next to mine. I cannot express what a privilege it always was to work with her, learn from her, and laugh with her.

I don’t have the heart or energy right now to write more about this, but I will try to do so at a later time. Right now I join their spouses and families, and my colleagues, in mourning.

by Matt Strassler at August 06, 2019 12:35 PM

August 03, 2019

ZapperZ - Physics and Physicists

Einstein's Blunder Explained
This, actually, is a good and quick summary of Einstein's cosmological constant by Minute Physics. You'll get a brief history of the cosmological constant, and how it came back to life.



Zz.

by ZapperZ (noreply@blogger.com) at August 03, 2019 12:59 PM

July 29, 2019

Clifford V. Johnson - Asymptotia

News from the Front XIX: A-Masing de Sitter

[caption id="attachment_19335" align="alignright" width="215"] Diamond maser. Image from Jonathan Breeze, Imperial College[/caption]This is part 2 of a chat about some recent thoughts and results I had about de Sitter black holes, reported in this arxiv preprint. Part 1 is here, so maybe best to read that first.

Now let us turn to de Sitter black holes. I mean here any black hole for which the asymptotic spacetime is de Sitter spacetime, which is to say it has positive cosmological constant. This is of course also interesting since one of the most natural (to some minds) possible explanations for the accelerating expansion of our universe is a cosmological constant, so maybe all black holes in our universe are de Sitter black holes in some sense. This is also interesting because you often read here about explorations of physics involving negative cosmological constant, so this is a big change!

One of the things people find puzzling about applying standard black hole thermodynamics here is that there are two places where the standard techniques tell you there should be an associated temperature. There's the black hole horizon itself, and there's also the cosmological horizon. Each of these has a temperature, and they are not necessarily the same. For the Schwarzschild-de Sitter black hole, for example, (so, no spins or charges... just a mass with an horizon associated with it, like in flat space), the black hole's temperature is always larger than that of the cosmological horizon. In fact, it runs from very large (where the black hole is small) all the way (as the black hole grows) to zero, where the two horizons coincide.
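In the standard semiclassical treatment (the textbook assignments, nothing new here), each horizon gets a temperature from its surface gravity \(\kappa\):

\[
T_{\rm bh} = \frac{\kappa_{\rm bh}}{2\pi}, \qquad T_{\rm cosmo} = \frac{\kappa_{\rm cosmo}}{2\pi} \qquad (\hbar = c = k_B = 1),
\]

with \(T_{\rm bh} \geq T_{\rm cosmo}\) for Schwarzschild-de Sitter, and both surface gravities vanishing in the limit where the two horizons coincide.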

You might wonder, as many have, how to make sense of the two temperatures. This cannot, for a start, be an equilibrium thermodynamics system. Should there be dynamics where the two temperatures try to equalise? Is there heat flow from one horizon to another, perhaps? Maybe there's some missing ingredient needed to make sense of this - do we have any right to be writing down temperatures (an equilibrium thermodynamics concept, really) when the system is not in equilibrium? (Actually, you could ask that about Schwarzschild in flat space - you compute the temperature and then discover that it depends upon the mass in such a way that the system wants to move to a different temperature. But I digress.)

The point of my recent work is that it is entirely within the realm of physics we have to hand to make sense of this. The simple system described in the previous post - the three level maser - has certain key interconnected features that seem relevant:

  • admits two distinct temperatures and
  • a maximum energy, and
  • a natural instability (population inversion) and a channel for doing work - the maser output.

My point is that these features are all present for de Sitter black holes too, starting with the two temperatures. But you won't see the rest by staring at just the Schwarzschild case, you need to add rotation, or charge (or both). As we shall see, the ability to reduce angular momentum, or to reduce charge, will be the work channel. I'll come back to the maximum [...] Click to continue reading this post

The post News from the Front XIX: A-Masing de Sitter appeared first on Asymptotia.

by Clifford at July 29, 2019 06:03 PM

July 26, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVIII: de Sitter Black Holes and Continuous Heat Engines

[caption id="attachment_19313" align="alignright" width="250"] Hubble photo of jupiter's aurorae.[/caption]Another title for this could be "Making sense of de Sitter black hole thermodynamics", I suppose. What I'm going to tell you about is either a direct correspondence or a series of remarkable inspiring coincidences. Either way, I think you will come away agreeing that there is certainly something interesting afoot.

It is an idea I'd been tossing around in my head from time to time over years, but somehow did not put it all together, and then something else I was working on years later, that was seemingly irrelevant, helped me complete the puzzle, resulting in my new paper, which (you guessed it) I'm excited about.

It all began when I was thinking about heat engines, for black holes in anti-de Sitter, which you may recall me talking about in posts here, here, and here, for example. Those are reciprocating heat engines, taking the system through a cycle that -through various stages- takes in heat, does work, and exhausts some heat, then repeats and repeats. And repeats.

I've told you the story about my realisation that there's this whole literature on quantum heat engines that I'd not known about, that I did not even know of a thing called a quantum heat engine, and my wondering whether my black hole heat engines could have a regime where they could be considered quantum heat engines, maybe enabling them to be useful tools in that arena...(resulting in the paper I described here)... and my delight in combining 18th Century physics with 21st Century physics in this interesting way.

All that began back in 2017. One thing I kept coming back to that really struck me as lovely is what can be regarded as the prototype quantum heat engine. It was recognized as such as far back as 1959!! It is a continuous heat engine, meaning that it does its heat intake and work and heat output all at the same time, as a continuous flow. It is, in fact, a familiar system - the three-level maser! (A basic laser also uses the key elements.)
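That 1959 result (due to Scovil and Schulz-DuBois) can be stated in one line. With the pump photon at frequency \(\omega_p\) drawing heat from a hot bath at temperature \(T_h\), the signal photon at \(\omega_s\), and the leftover \(\omega_p - \omega_s\) dumped into a cold bath at \(T_c\), the maser's efficiency obeys

\[
\eta = \frac{\omega_s}{\omega_p} \leq 1 - \frac{T_c}{T_h},
\]

i.e. population inversion - and hence maser action - is possible only up to the Carnot bound.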

A maser can be described as taking in energy as heat from an external source, and giving out energy in the form of heat and work. The work is the desired [...] Click to continue reading this post

The post News from the Front, XVIII: de Sitter Black Holes and Continuous Heat Engines appeared first on Asymptotia.

by Clifford at July 26, 2019 03:44 PM

July 25, 2019

Axel Maas - Looking Inside the Standard Model

Talking about the same thing
In this blog entry I will try to explain my most recent paper. The theme of the paper is rather simply put: you should not compare apples with oranges. The subtlety comes from knowing whether you have an apple or an orange in your hand. This is far less simple than it sounds.

The origin of the problem is once more gauge theories. In gauge theories, we have introduced additional degrees of freedom, and, in fact, we have a choice of how we do this. Of course, our final results will not depend on the choice. However, getting to the final result is not always easy, so it would be good to ensure that the intermediate steps are right. But the intermediate steps do depend on the choice, and they are therefore only comparable between two different calculations if the same choice is made in both.

At first it seems simple to make the same choice. Ultimately, it is our choice, right? But this is actually not that easy in such theories, due to their mathematical complexity. Thus, rather than making the choice explicit, it is made implicitly. How this is done differs, again for technical reasons, between methods. And because of all these technicalities, and the fact that we need to make approximations, figuring out whether the implicit conditions yield the same explicit choice is difficult. This matters, because the choice modifies the equations describing our auxiliary quantities.

In the paper I test this. If everything is consistent between two particular methods, then the solutions obtained in one method should be a solution to the equations obtained in the other method. Seems a simple enough idea. There had been various arguments in the past which suggested that this should be the case. But there have been more and more pieces of evidence over the last couple of years that led me to think that something was amiss. So I made this test, and did not rely on the arguments.
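The logic of the test can be caricatured in a few lines (a toy of my own devising; the made-up equations merely stand in for the real, far more complicated ones):

    import numpy as np

    # Two "methods" each define an equation E_i[D] = 0 for the same
    # auxiliary quantity D(p). If both implicitly make the same choice,
    # the solution of one should also solve the other's equation.
    def eq_method_b(D, p):
        # hypothetical stand-in for method B, with a slightly different
        # implicit choice (here: a shifted mass parameter)
        return D - 1.0 / (p**2 + 1.5)

    p = np.linspace(0.1, 10.0, 200)
    D_a = 1.0 / (p**2 + 1.0)   # exact solution of toy method A

    residual = np.max(np.abs(eq_method_b(D_a, p)))
    print(f"max residual of A's solution in B's equation: {residual:.3f}")

A sizeable residual signals that the two implicit choices differ - which is the kind of mismatch the paper reports.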

And indeed, what I find in the article is that the solution of one method does not solve the equation from the other method. The way this happens strongly suggests that the implicit choices made are not equivalent. Hence, the intermediate results are different. This does not mean that they are wrong. They are just not comparable. Either method can still yield internally consistent results. But since neither of the methods is exact, a comparison between both would help reassure us that the approximations made make sense. And this is now hindered.

So, what to do now? We would very much like to have the possibility to compare different methods at the level of the auxiliary quantities. So this needs to be fixed. This can only be achieved if the same choice is made in all the methods. The tough question is in which method we should work on the choice. Should we try to make the same choice as in one fixed method? Should we try to find a new choice in all methods? This is tough, because everything is so implicit, and affected by approximations.

At the moment, I think the best way is to get one of the existing choices to work in all methods. Creating an entirely different one for all methods appears to me far too much additional work. And I, admittedly, have no idea what a better starting point would be than the existing ones. But in which method should we start trying to alter the choice? In neither method does this seem to be simple. In both cases, there are fundamental obstructions which need to be resolved. I would therefore currently like to start poking around in both methods, hoping that there may be a point in between where the choices of the methods could meet, which would be easier than pushing all the way in either one. I have a few ideas, but they will take time. Probably also a lot more people than just me.

This investigation also amazes me, as the theory where this happens is nothing new. Far from it: it is more than half a century old, older than I am. And it is not something obscure, but rather part of the standard model of particle physics - a very essential element in our description of nature. It never ceases to baffle me how little we still know about it, and how unbelievably complex it is at a technical level.

by Axel Maas (noreply@blogger.com) at July 25, 2019 08:27 AM

July 08, 2019

Sean Carroll - Preposterous Universe

Spacetime and Geometry: Now at Cambridge University Press

Hard to believe it’s been 15 years since the publication of Spacetime and Geometry: An Introduction to General Relativity, my graduate-level textbook on everyone’s favorite theory of gravitation. The book has become quite popular, being used as a text in courses around the world. There are a lot of great GR books out there, but I felt another one was needed that focused solely on the goal of teaching students general relativity. That might seem like an obvious goal, but many books also try to serve as reference books, or to put forward a particular idiosyncratic take on the subject. All I want to do is to teach you GR.

And now I’m pleased to announce that the book is changing publishers, from Pearson to Cambridge University Press. Even with a new cover, shown above.

I must rush to note that it’s exactly the same book, just with a different publisher. Pearson was always good to me, I have no complaints there, but they are moving away from graduate physics texts, so it made sense to try to find S&G a safe permanent home.

Well, there is one change: it’s cheaper! You can order the book either from CUP directly, or from other outlets such as Amazon. Copies had been going for roughly $100, but the new version lists for only $65 — and if the Amazon page is to be believed, it’s currently on sale for an amazing $46. That’s a lot of knowledge for a minuscule price. I’d rush to snap up copies for you and your friends, if I were you.

My understanding is that copies of the new version are not quite in stores yet, but they’re being printed and should be there momentarily. Plenty of time for courses being taught this Fall. (Apologies to anyone who has been looking for the book over the past couple of months, when it’s been stuck between publishers while we did the handover.)

Again: it’s precisely the same book. I have thought about doing revisions to produce an actually new edition, but I think about many things, and that’s not a super-high priority right now. Maybe some day.

Thanks to everyone who has purchased Spacetime and Geometry over the years, and said such nice things about it. Here’s to the next generation!

by Sean Carroll at July 08, 2019 08:03 PM

June 19, 2019

Axel Maas - Looking Inside the Standard Model

Creativity in physics
One of the most widespread misconceptions about physics, and other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic - while art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to be fitted perfectly together, and there is no room for sloppiness. Logical deduction is central in what we do. But this is not all. In fact, these parts are more like the handiwork. Just like a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, so we need to be able to calculate, build, check, and infer. But just as the act of drawing a line or writing a sentence is not yet what we recognize as art, solving an equation is not yet physics.

We are able to solve an equation because we learned how during our studies. We learned what was known before. Thus, this is our tool set, just like people read books before starting to write one. But when we actually do research, we face the fact that nobody knows what is going on. In fact, quite often we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, need inspiration, on how to start. And then afterwards, just like the writer writes page after page, we add various pieces to this idea, until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear. Then we have to calculate and so on, just like the writer has to go and fix the draft to become a book.

You may now wonder whether this kind of creativity is limited to the great minds, and to the inception of a whole new step in physics. No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, occasionally somebody has the right idea. But this is usually just the one idea which turns out, in the end, to be correct, while all the other good ideas other people had turned out to be incorrect - and you never hear of them because of this. On the other hand, every new idea, as said above, eventually requires all that was done before. And more than that. Creativity is rarely born out of being a hermit. It is often sparked by inspiration from others. Talking to each other, throwing fragments of ideas at each other, and mulling over consequences together is what creates the soil where creativity sprouts. All those with whom you have interacted have contributed to the birth of your idea.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky or curiosity-driven research. It is not a coincidence that the freedom to do whatever kind of research you think is important is an almost sacred privilege of hired scientists. Or should be. Fortunately I am privileged enough, especially in the European Union, to have this privilege. In other places, you are often shackled by all kinds of external influences, down to political pressure to only do politically acceptable research. And this can never spark the creativity you need to make something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything which is not already established as acceptable becomes unthinkable. This may not always be as obvious as outright political pressure. But if being hired, or keeping your job, starts to depend on it, you start going for acceptable research. Because failure with something new would cost you dearly. And with the currently quite common competitive funding, prevalent particularly for non-permanently hired people, this starts to become a serious obstruction.

As a consequence, real breakthrough research can neither be planned nor done on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you actually never really fail, because you always learn how something does not work. That is one of the reasons why I strongly want failures to become publicly available as well. They are as important to progress as successes, by reducing the possibilities. Not to mention the amount of researchers' lifetime wasted because they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way. Not intentionally, but because it was necessary to follow one's creativity. And that is actually where most technological leaps came from. So, real progress in physics, in the end, is made from about a third craftsmanship, a third communication, and a third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out by just following my ideas: either it would keep me in business, or I would have to find a non-science job. But this only worked out because of my personal background, because I could have afforded a couple of months with no income while finding a job, and had an education which almost guarantees me a decent job eventually. And an education of this quality I could only afford because of my personal background. Not to mention that as a white male I had no systemic barriers against me. So, yes, privilege plays a major role.

The other part was that I learned more and more that it is not effort that counts, but effect. That took me years. But eventually, I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. But not working overtime, and enjoying free days and holidays, is for me as important for the creative process as any other condition. Not to mention that I also do all non-creative chores much more efficiently when well rested, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I have now had the opportunity, during a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to acquire a new field and already have new ideas for it. It is the possibility to sit down with people for some hours, especially in a nicer and more relaxing surrounding than an office, and just discuss ideas. That is also what I like most about conferences. And it is one of the reasons I think conferences will always be necessary, even though we need to make going there and back ecologically much more viable, and restrict ourselves to sufficiently close ones until this is possible.

Sitting down over a good cup of coffee or a nice meal and just discussing really jump-starts my creativity. Even sitting with a cup of good coffee in a nice cafe somewhere and just thinking does wonders for me in solving problems. And with that, it seems I am not so different from artists, after all.

by Axel Maas (noreply@blogger.com) at June 19, 2019 02:53 PM

June 18, 2019

Marco Frasca - The Gauge Connection

Cracks in the Witten’s index theorem?

Recently, a rather interesting paper (see here for the preprint) appeared in Physical Review Letters. The authors study the \({\cal N}=1\) Wess-Zumino model, the prototype of any further SUSY model, and show that there exists an anomaly at one loop in perturbation theory that breaks supersymmetry. This is rather shocking, as the model is supersymmetric at the classical level and, in agreement with Witten’s index theorem, no breaking of supersymmetry should ever be observed. Indeed, the authors, in the conclusions, correctly ask how Witten’s theorem copes with this rather strange behavior. Of course, Witten’s theorem is correct, and the question arises naturally and is very interesting for further studies.
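For orientation, the theorem being stressed involves the Witten index (the standard definition, nothing introduced by the paper):

\[
\Delta = {\rm Tr}\,(-1)^F e^{-\beta H} = n_B^{E=0} - n_F^{E=0},
\]

the difference between the numbers of bosonic and fermionic zero-energy states. Whenever \(\Delta \neq 0\), at least one zero-energy state exists and supersymmetry cannot be spontaneously broken - which is exactly why a one-loop breaking looks so shocking.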

This result is important, as I have encountered a similar situation for the Wess-Zumino model in a couple of papers. The first one (see here and here) was published and shows how the classical Wess-Zumino model, in a strong coupling regime, breaks supersymmetry. Therefore, I asked a question similar to the one above: how do quantum corrections recover Witten’s theorem? The second one has remained a preprint (see here). I tried to send it to Physics Letters B, but the referee, without any check of the mathematics, just claimed that Witten’s theorem forbids my conclusions. The Editor asked me to withdraw the paper for this same reason, a very strong one indeed. So, I never submitted this paper again and just checked the classical case, where I was luckier.

So, my question is still alive: Has supersymmetry in itself the seeds of its breaking?

This is really important in view of the fact that the Minimal Supersymmetric Standard Model (MSSM), now in disgrace after the LHC results, can have a dark side in its soft supersymmetry breaking sector. This, in turn, could entail a wrong understanding of where the superpartners could be after the breaking. Anyway, it is really something exciting already at the theoretical level. We are just stressing Witten’s index theorem in search of answers.

by mfrasca at June 18, 2019 03:06 PM

June 14, 2019

Matt Strassler - Of Particular Significance

A Ring of Controversy Around a Black Hole Photo

[Note Added: Thanks to some great comments I’ve received, I’m continuing to add clarifying remarks to this post.  You’ll find them in green.]

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public. Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t. There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? These have been problematic issues from the beginning, but discussion is starting to heat up. And it’s important: it has implications for the measurement of the black hole’s mass (which EHT claims is that of 6.5 billion Suns, with an uncertainty of about 15%), and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it. So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

One note before I begin: this post is long. But it starts with a summary of the situation that you can read quickly, and then comes the long part: a step-by-step non-technical explanation of an important aspect of the black hole ‘photo’ that, to my knowledge, has not yet been given anywhere else.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What Does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well. If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, ​that’s not enough information. We don’t get to observe the black hole itself (it’s black, after all!) What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (a plasma of mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.) The black hole’s gravity then causes the paths on which the radio waves travel to bend, even more than a glass lens will bend the path of visible light, so that where things appear in the ‘photo’ is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time. (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and it has just been said publicly in a paper by Samuel Gralla, Daniel Holz and Robert Wald.

Two months ago, when the EHT `photo’ appeared, it was widely reported in the popular press and on blogs that the photo shows the image of a photon sphere at the edge of the shadow of the M87bh. (Instead of `shadow’, I suggested the term ‘quasi-silhouette‘, which I viewed as somewhat less misleading to a non-expert.)

Unfortunately, it seems these statements are not true; and this was well-known to (but poorly communicated by, in my opinion) the EHT folks.  This lack of clarity might perhaps annoy some scientists and science-loving non-experts; but does this issue also matter scientifically? Gralla et al., in their new preprint, suggest that it does (though they were careful to not yet make a precise claim.)

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully when the `photo’ first appeared, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was. You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. Studying the equations and conversing with expert colleagues, I soon learned why: for a rotating black hole, the photon sphere doesn’t really exist.

But let’s first define what the photon sphere is for a non-rotating black hole! Like the Earth’s equator, the photon sphere is a location, not an object. This location is the surface of an imaginary ball, lying well outside the black hole’s horizon. On the photon sphere, photons (the particles that make up light, radio waves, and all other electromagnetic waves) travel on special circular or spherical orbits around the black hole.
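For a non-rotating (Schwarzschild) black hole of mass \(M\), the standard numbers are

\[
r_{\rm ph} = \frac{3GM}{c^2} = \frac{3}{2}\, r_s, \qquad b_c = \sqrt{27}\,\frac{GM}{c^2} \approx 2.6\, r_s,
\]

where \(r_s = 2GM/c^2\) is the horizon radius and \(b_c\) is the critical impact parameter, i.e. the apparent radius at which a distant observer would see the photon ring.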

By contrast, a rotating black hole has a larger, broader `photon-zone’ where photons can have special orbits. But you won’t ever see the whole photon zone in any image of a rotating black hole. Instead, a piece of the photon zone will appear as a `photon ring‘, a bright and very thin loop of radio waves. However, the photon ring is not the edge of anything spherical, is generally not perfectly circular, and generally is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it has a photon-zone rather than a photon-sphere, and images of it will have a photon ring. Ok, fine; but then, can we interpret EHT’s `photo’ simply as showing the photon ring, blurred by the imperfections in the `telescope’? Although some of the EHT folks have seemed to suggest the answer is “yes”, Gralla et al. suggest the answer is likely “no” (and many of their colleagues have been pointing out the same thing in private.) The circlet of radio waves that appears in the EHT `photo’ is probably not simply a blurred image of M87bh’s photon ring; it probably shows a combination of the photon ring with something brighter (as explained below). That’s where the controversy starts.

…so the Dark Patch May Not Be the Full Shadow…

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’ in describing it in public contexts, though that’s my own personal term) but no matter what you call it, in its ideal form it is supposed to be an absolutely dark area whose edge is the photon ring. But in reality the perfectly dark area need not appear so dark after all; it may be partly filled in by various effects. Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway. The EHT folks are well aware of this, but at the time the photo came out, many science writers and scientist-writers (including me) were not.

…so EHT’s Measurement of the M87bh’s Mass is Being Questioned

It was wonderful that EHT could make a picture that could travel round the internet at the speed of light, and generate justifiable excitement and awe that human beings could indirectly observe such an amazing thing as a black hole with a mass equal to that of several billion Sun-like stars. Qualitatively, they achieved something fantastic in showing that yes, the object at the center of M87 really is as compact and dark as such a black hole would be expected to be! But the EHT telescope’s main quantitative achievement was a measurement of the mass of the M87bh, with a claimed precision of about 15%.

Naively, one could imagine that the mass is measured by looking at the diameter of the dark spot in the black hole ‘photo’, under the assumption that it is the black hole’s shadow. So here’s the issue: Could interpreting the dark region incorrectly perhaps lead to a significant mistake in the mass measurement, and/or an underestimate of how uncertain the mass measurement actually is?

I don’t know.  The EHT folks are certainly aware of these issues; their simulations show them explicitly.  The mass of the M87bh isn’t literally measured by putting a ruler on the ‘photo’ and measuring the size of the dark spot! The actual methods are much more sophisticated than that, and I don’t understand them well enough yet to explain, evaluate or criticize them. All I can say with confidence right now is that these are important questions that experts currently are debating, and consensus on the answer may not be achieved for quite a while.

———————————————————————-

The Appearance of a Black Hole With Nearby Matter

Ok, now I’m going to explain the most relevant points, step-by-step. Grab a cup of coffee or tea, find a comfy chair, and bear with me.

Because fast-rotating black holes are more complicated, I’m going to start illuminating the controversy by looking at a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh may well be rotating at a very good clip.

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.) See Figure 1. Where will the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) The path of that light is the orange arrow in Figure 1. But then there will be an indirect image (the green arrow in Figure 1) from light that goes halfway around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. In short, Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

BHTruthBulb.png

Figure 1: A light bulb (yellow) outside but near the non-rotating black hole’s horizon (in black) can be seen by someone at the top of the image not only through the light that goes directly upward (orange line) — a “direct image” — but also through light that makes partial or complete orbits of the black hole — “indirect images.” The first indirect and second indirect images are from light taking the green and blue paths. For light to make orbits of the black hole, it must travel near the grey-dashed circle that indicates the location of a “photon-sphere.” (A rotating black hole has no such sphere, but when seen from the north or south pole, the light observed takes similar paths to what is shown in this figure.) [The paths of the light rays were calculated carefully using Mathematica 11.3.]
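If you’d like to trace paths like these yourself, here is a minimal Python stand-in (my sketch, not the Mathematica code behind Figure 1). It integrates the standard Schwarzschild null-geodesic equation d²u/dφ² = 3Mu² − u, with u = 1/r and G = c = 1; rays whose impact parameter b is just above the critical value b_c = 3√3 M wind around near r = 3M before escaping, which is exactly how the indirect images arise.

```python
# Sketch: light rays around a non-rotating (Schwarzschild) black hole,
# via the null-geodesic equation u'' = 3*M*u**2 - u, with u = 1/r (G = c = 1).
# A stand-in illustration, not the Mathematica code used for Figure 1.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0                       # black hole mass (sets all length scales)
b_c = 3 * np.sqrt(3) * M      # critical impact parameter, ~5.196 M

def rhs(phi, y):
    u, du = y
    return [du, 3 * M * u**2 - u]

def trace(b, r0=50.0):
    """Send a ray inward from radius r0 with impact parameter b;
    return the total angle it sweeps before capture or escape."""
    u0 = 1.0 / r0
    # radial null equation: (du/dphi)**2 = 1/b**2 - u**2 (1 - 2 M u)
    du0 = np.sqrt(1.0 / b**2 - u0**2 * (1 - 2 * M * u0))
    def hit(phi, y): return y[0] - 1.0 / (2 * M)    # crossed the horizon
    def out(phi, y): return y[0] - 1.0 / (2 * r0)   # escaped back to large r
    hit.terminal = out.terminal = True
    sol = solve_ivp(rhs, (0.0, 8 * np.pi), [u0, du0],
                    events=[hit, out], max_step=0.01)
    return sol.t[-1]

# The closer b is to b_c from above, the more the ray winds around r = 3M:
for b in [1.5 * b_c, 1.01 * b_c, 1.0001 * b_c]:
    print(f"b = {b / b_c:.4f} b_c : ray sweeps {trace(b):.2f} radians")
```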

What you can see in Figure 1 is that both the first and second indirect images are formed by light that spends part of its time close to a special radius around the black hole, shown as a dotted line. This imaginary surface, the edge of a ball, is an honest “photon-sphere” in the case of a non-rotating black hole.

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north (or south) pole; there’s a special circle then too. But that circle is not the edge of a photon-sphere! In general, photons can have special orbits in a wide region, which I called the “photon-zone” earlier, and only a small set of them are on this circle. You’ll see photons from other parts of the photon zone if you look at the black hole not from the poles but from some other angle.

[If you’d like to learn a bit more about the photon zone, and you have a little bit of knowledge of black holes already, you can profit from exploring this demo by Professor Leo Stein: https://duetosymmetry.com/tool/kerr-circular-photon-orbits/ ]

Back to the non-rotating case: What our camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is large and bright; this is one of Gralla et al.‘s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high-powered camera, we’ll never pick those other images up. Let’s therefore focus our attention on the direct image and the first indirect image.

BHObsvBulb3.png

Figure 2: What the drawing in Figure 1 actually looks like to the observer peering toward the black hole; all the indirect images lie at almost exactly the same distance from the black hole’s center.
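Incidentally, the steep suppression of the second and later indirect images has a simple origin: each additional half-orbit near the photon sphere demagnifies the image by a roughly constant factor. For a non-rotating black hole that factor is the standard result

\frac{I_{n+1}}{I_n} \approx e^{-\pi} \approx 0.043,

i.e. each successive image is more than twenty times fainter than the previous one, which is where numbers like `less than 5%' come from.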

WARNING (since this seems to be a common confusion):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGES ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Proceeding step by step toward a more realistic situation, let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now?

BHTruthCirc2.png

Figure 3: if we replace the light bulb with a circle of light, the paths of the light are the same as in Figure 1, except now for each point along the circle. That means each direct and indirect image itself forms a circle, as shown in the next figure.

That’s shown in Figure 4: the direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

BHObsvCirc3.png

Figure 4: The circular bulb’s direct image is the bright circle, but a somewhat dimmer first indirect image appears further out, and just beyond one finds all the other indirect images, forming a thin `photon ring’.

Importantly, if we consider circular bulbs of different diameter [yellow, red and blue in Figure 5], then although the direct images reflect the differences in the bulbs’ diameters (somewhat enlarged by lensing), the first indirect images all are about the same diameter, just a tad larger or smaller than the photon ring.  The remaining indirect images all sit together at the radius of the photon ring.

BH3Circ4.png

Figure 5: Three bulbs of different diameter (yellow, blue, red) create three distinct direct images, but their first indirect images are located much closer together, and very close to the photon ring where all their remaining indirect images pile up.

These statements are also essentially true for a rotating black hole seen from the north or south pole; a circular bulb generates a series of circular images, and the indirect images all pile more or less on top of each other, forming a photon ring. When viewed off the poles, the rotating black hole becomes a more complicated story, but as long as the viewing angle is small enough, the changes are relatively minor and the picture is qualitatively somewhat similar.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 7? That’s relevant because black holes are thought to have `accretion disks’ made of material orbiting the black hole, and eventually spiraling in. The accretion disk may well be the dominant source emitting radio waves at the M87bh. (I’m showing a very thin uniform disk for illustration, but a real accretion disk is not uniform, changes rapidly as clumps of material move within it and then spiral into the black hole, and may be quite thick — as thick as the black hole is wide, or even thicker.)

Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown in Figure 6 left, on one side of the disk, as an orange wash) would form a disk in your camera, the dim red region in Figure 6 right; the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be similar. However, the indirect images would all pile up in almost the same place from your perspective, forming a bright and quite thin ring, the bright yellow circle in Figure 6 right. (The path of the disk’s first indirect image is shown in Figure 6 left, going halfway about the black hole as a green wash; notice how it narrows as it travels, which is why it appears as a narrow ring in the image at right.) This circle — the full set of indirect images of the whole disk — is the edge of the photon-sphere for a non-rotating black hole, and the circular photon ring for a rotating black hole viewed from its north or south pole.

BHDisk2.png

Figure 6: A glowing disk of material (note it does not touch the black hole) looks like a version of Figure 5 with many more circular bulbs. The direct image of the disk forms a disk (illustrated at left, for a piece of the disk, as an orange wash) while the first indirect image becomes highly compressed (illustrated, for a piece of the disk, as a green wash) and is seen as a narrow circle of bright light.  (It is expected that the disk is mostly transparent in radio waves, so the indirect image can pass through it.) That circle, along with the other indirect images, forms the photon ring. In this case, because the disk’s inner edge lies close to the black hole horizon, the photon ring sits within the disk’s direct image, but we’ll see a different example in Figure 9.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, currently unobservable at EHT, the `photon ring’, while EHT refers to all the indirect images as the `photon ring’. Just letting you know in case you hear `lensed ring’ referred to in future.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — i.e., along its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, technology had to be pushed to its absolute limits. Someday we’ll see both the disk and the ring, but right now, they’re all blurred together. So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim and the ring so bright that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that the photo will show mainly the disk, not the photon ring! This is shown in Figure 7, which you can compare with the Black Hole `photo’ (Figure 8). (Figure 7 is symmetric around the ring, but the photo is not, for multiple reasons — Doppler-like effect from rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer til another post.)

More precisely, the ring and disk blur together, but the brightness of the image is dominated by the disk, not the ring.

BHBlurDisk_a1_2.png

Figure 7: At left is repeated the image in Figure 6, as seen in a perfect camera, while at right the same image is shown when observed using a camera with imperfect vision. The disk and ring blur together into a single thick ring, whose brightness is dominated by the disk. Note that the shadow — the region surrounded by the yellow photon ring — is not the same as the dark patch in the right-hand image; the dark patch is considerably smaller than the shadow.
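You can check the logic with a crude one-dimensional toy model (mine, with invented numbers; this is emphatically not how EHT analyzes its data). Lay down a broad, dim strip of brightness for the disk and a very thin, bright strip for the ring, blur both with a Gaussian beam of roughly EHT’s ~20 micro-arcsecond resolution, and compare: blurring dilutes a thin ring’s peak brightness by roughly its width divided by the beam width, so unless the ring carries most of the total light, the blurred image is dominated by the disk.

```python
# 1-d toy model: dim broad "disk" + bright thin "ring", blurred by a Gaussian
# beam. All numbers invented for illustration -- not EHT's actual analysis.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-60.0, 60.0, 4801)   # angle across the image, micro-arcsec
dx = x[1] - x[0]

disk = np.where((np.abs(x) > 15) & (np.abs(x) < 45), 1.0, 0.0)   # dim, broad
ring = np.where(np.abs(np.abs(x) - 20.0) < 0.2, 20.0, 0.0)       # bright, thin

beam_fwhm = 20.0                      # roughly EHT's resolution (micro-arcsec)
sigma = beam_fwhm / 2.355 / dx        # Gaussian sigma in pixels

disk_b = gaussian_filter1d(disk, sigma)
ring_b = gaussian_filter1d(ring, sigma)

print(f"before blurring: disk peak {disk.max():.2f}, ring peak {ring.max():.2f}")
print(f"after  blurring: disk peak {disk_b.max():.2f}, ring peak {ring_b.max():.2f}")
# The ring is 20x brighter per unit angle, but so thin that blurring dilutes
# it roughly 50-fold; the blurred profile is then dominated by the disk.
```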

Let’s say that again: the black hole `photo’ may mainly show the M87bh’s accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo.’ In general, the patch could be considerably smaller than what is usually termed the `shadow’ of the black hole.

M87BH_Vicinity_Photo_2a.png

Figure 8: (Left) We probably observe the M87bh at a small angle off its south pole. Its accretion disk has an unknown size and shape — it may be quite thick and non-uniform — and it may not even lie at the black hole’s equator. The disk and the black hole interact to create outward-going jets of material (observed already many years ago but not clearly visible in the EHT ‘photo’.) (Right) The EHT `photo’ of the M87bh (taken in radio waves and shown in false color!) Compare with Figure 7; the most important difference is that one side of the image is brighter than the other. This likely arises from (a) our view being slightly off from the south pole, combined with (b) rotation of the black hole and its disk, and (c) possibly other more subtle issues.

This is important. The photon ring’s diameter, and thus the width of the `shadow’ too, barely depend on the rotation rate of the black hole; they depend almost exclusively on the black hole’s mass. So if the ring in the photo were simply the photon ring of the M87bh, you’d have a very simple way to measure the black hole’s mass without knowing its rotation rate: you’d look at how large the dark patch is, or equivalently, the diameter of the blurry ring, and that would give you the answer to within 10%. But it’s nowhere near so simple if the blurry ring shows the accretion disk, because the accretion disk’s properties and appearance can vary much more than the photon ring; they can depend strongly on the black hole’s rotation rate, and also on magnetic fields and other details of the black hole’s vicinity.
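To get a feel for the numbers: if the blurry ring really were just the photon ring, its angular diameter on the sky would be 2 b_c/d = 6√3 GM/(c² d). Here is that back-of-envelope check in Python (mine, plugging in EHT’s published values of roughly 6.5 billion solar masses and a distance of about 16.8 Mpc); it lands near the ~42 micro-arcseconds EHT measured, which is exactly why the simple interpretation is so tempting.

```python
# Back-of-envelope: angular diameter of the photon ring of a Schwarzschild
# black hole with (roughly) the EHT-published M87 values. Illustration only.
import numpy as np

G     = 6.674e-11          # m^3 kg^-1 s^-2
c     = 2.998e8            # m/s
M_sun = 1.989e30           # kg
Mpc   = 3.086e22           # m

M = 6.5e9 * M_sun          # EHT's quoted mass for the M87 black hole
d = 16.8 * Mpc             # distance to M87

b_c = 3 * np.sqrt(3) * G * M / c**2          # critical impact parameter
theta = 2 * b_c / d                          # angular diameter, radians
muas = theta * (180 / np.pi) * 3600 * 1e6    # convert to micro-arcseconds

print(f"photon-ring angular diameter ~ {muas:.1f} micro-arcseconds")
# Prints ~40 muas, to be compared with the ~42 muas ring in the EHT image.
```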

The Important Role of Rotation

If we conclude that EHT is seeing a mix of the accretion disk with the photon ring, with the former dominating the brightness, then this makes EHT’s measurement of the M87bh’s mass more confusing and even potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important. Despite what I showed (for pedagogical simplicity) in Figure 7, for a non-rotating black hole the accretion disk’s central gap is actually expected to lie outside the photon ring; this is shown at the top of Figure 9.  But  the faster the black hole rotates, the smaller this central gap is expected to be, to the point that for a fast-rotating black hole the gap will lie inside the photon ring, as shown at the bottom of Figure 9. (This tendency is not obvious; it requires understanding details of the black hole geometry.) And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette), which is the region inside the photon ring. It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

The effect of blurring in the two cases of slow (or zero) and fast rotation are illustrated in Figure 9, where the photon ring’s size is taken to be the same in each case but the disk’s inner edge is close in or far out. (The black holes, not illustrated since they aren’t visible anyway, differ in mass by about 10% in order to have the photon ring the same size.) This shows why the size of the dark patch can be quite different, depending on the disk’s shape, even when the photon ring’s size is the same.

BHBlurDisk_a0_a1_3.png

Figure 9: Comparing the appearance of slightly more realistically-shaped disks around slowly rotating or non-rotating black holes (top) to those around fast-rotating black holes (bottom) of the same mass, as seen from the north or south pole. (Left) the view in a perfect camera; (right) rough illustration of the effect of blurring in the current version of the EHT. The faster the black hole is spinning, the smaller the central gap in the accretion disk is likely to be. No matter what the extent of the accretion disk (dark red), the photon ring (yellow) remains at roughly the same location, changing only by 10% between a non-rotating black hole and a maximally rotating black hole of the same mass. But blurring in the camera combines the disk and photon ring into a thick ring whose brightness is dominated by the disk rather than the ring, and which can therefore be of different size even though the mass is the same. This implies that the radius of the blurry ring in the EHT `photo’, and the size of the dark region inside it, cannot by themselves tell us the black hole’s mass; at a minimum we must also know the rotation rate (which we do not.)

Gralla et al. subtly raise these questions but are careful not to overstate their case, perhaps because they have not yet completed their study of rotating black holes. But the question is now in the air.

I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures. In particular, EHT’s simulations show all of the effects mentioned above; there’s none of this of which they are unaware. (In fact, the reason I know my illustrations above are reasonable is partly because you can see similar pictures in the EHT papers.) As long as the EHT folks correctly accounted for all the issues, then they should have been able to properly measure the mass and estimate their uncertainties correctly. In fact, they don’t really use the photo itself; they use more subtle techniques applied to their telescope data directly. Thus it’s not enough to argue the photo itself is ambiguous; one has to argue that EHT’s more subtle analysis methods are flawed. No one has argued that yet, as far as I am aware.

But the one thing that’s clear right now is that science writers almost uniformly got it wrong [because the experts didn’t explain these points well] when they tried to describe the image two months ago. The `photo’ probably does not show “a photon ring surrounding a shadow.” That would be nice and simple and impressive-sounding, since it refers to fundamental properties of the black hole’s warping effects on space. But it’s far too glib, as Figures 7 and 9 show. We’re probably seeing an accretion disk supplemented by a photon ring, all blurred together, and the dark region may well be smaller than the black hole’s shadow.

(Rather than, or in addition to, the accretion disk, it is also possible that the dominant emission in the photo comes from the inner portion of one of the jets that emerges from the vicinity of the black hole; see Figure 8 above. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Someday in the not distant future, improved imaging should allow EHT to separately image the photon ring and the disk, so both can be observed easily, as in the left side of Figure 9. Then all these questions will be answered definitively.

Why the Gargantua Black Hole from Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 10. And that leads to the Gargantua image from the movie, also shown in Figure 10. Notice the photon ring (which is, as I cautioned you earlier, off-center!)   [Note added: this figure has been modified; in the original version I referred to the top and bottom views of the disk’s far side as the  “1st indirect image”, but as pointed out by Professor Jean-Pierre Luminet, that’s not correct terminology here.]

BHGarg4.png

Figure 10: The movie Interstellar features a visit to an imaginary black hole called Gargantua, and the simulated images in the movie (from 2014) are taken from near the equator, not the pole. As a result, the direct image of the disk cuts across the black hole, and indirect images of the back side of the disk are seen above and below the black hole. There is also a bright photon ring, slightly off center; this is well outside the surface of the black hole, which is not visible. A real image would not be symmetric left-to-right; it would be brighter on the side that is rotating toward the viewer.  At the bottom is shown a much more realistic visual image (albeit of lower quality) from 1994 by Jean-Alain Marck, in which this asymmetry can be seen clearly.

However, the movie image leaves out an important Doppler-like effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so a real image from this vantage point would be very asymmetric — bright on the left, dim on the right — unlike the movie image.  At the suggestion of Professor Jean-Pierre Luminet I have added, at the bottom of Figure 10, a very early simulation by Jean-Alain Marck that shows this effect.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole.

Final Remarks: Where a Rotating Black Hole Differs from a Non-Rotating One

Before I quit for the week, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) As I’ve just emphasized, what a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A pole observer, an equatorial observer, and a near-pole observer see quite different things. (As noted in Figure 8, we are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are some reasons to expect this. Even then, the story is complex.

2) As I mentioned above, instead of a photon-sphere, there is a ‘photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. For high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; and this can cause a filling-in of the ‘shadow’. (A small numerical sketch just after this list lets you explore where this happens.)

3) Depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. We’re not viewing the M87bh on its axis, and that further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that I, at least, cannot currently check.
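As promised in point 2, here is that small numerical sketch (mine, using the standard Bardeen-Press-Teukolsky formulas, and taking the photon zone’s equatorial extent to run from the prograde to the retrograde circular photon orbit; other reasonable criteria will shift the exact crossover spin). It computes the disk’s innermost stable circular orbit (ISCO), a standard proxy for the disk’s inner edge, alongside the photon-zone boundaries, as a function of the spin parameter a:

```python
# Equatorial Kerr radii vs. spin a (in units of M, with G = c = 1):
# ISCO from Bardeen-Press-Teukolsky (1972); circular photon orbits at
# r = 2(1 + cos((2/3) arccos(-a))) (prograde) and with +a (retrograde).
# A sketch under the stated assumptions, not a definitive criterion.
import numpy as np

def r_isco(a):
    """Prograde innermost stable circular orbit, in units of M."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def r_photon(a, prograde=True):
    """Equatorial circular photon orbit radius, in units of M."""
    s = -1.0 if prograde else 1.0
    return 2 * (1 + np.cos((2 / 3) * np.arccos(s * a)))

for a in [0.0, 0.5, 0.7, 0.9, 0.998]:
    lo, hi = r_photon(a, True), r_photon(a, False)
    isco = r_isco(a)
    where = "inside" if isco < hi else "outside"
    print(f"a = {a:5.3f}: photon zone r = {lo:.2f}-{hi:.2f} M, "
          f"ISCO r = {isco:.2f} M ({where} the photon zone)")
```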

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 14, 2019 12:15 PM

June 11, 2019

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, please send me an email and we can arrange to grant you posting rights.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:28 AM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are omitted altogether. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo (HMC) for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov-space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
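To give non-experts a taste of the book’s central algorithm: hybrid Monte Carlo extends the fields by random conjugate momenta, integrates the resulting Hamiltonian equations approximately with a reversible leapfrog integrator, and then applies a Metropolis accept/reject test, so that the integrator’s discretization error does not bias the sampled distribution. Below is a deliberately minimal toy version for a single real variable with action S(phi) = phi^2/2 + phi^4/4 (my illustration, not code from the book, which of course treats full lattice gauge fields and pseudofermions):

```python
# Minimal hybrid Monte Carlo (HMC) for a toy action S(phi) = phi^2/2 + phi^4/4.
# Illustration only -- real lattice QCD applies the same idea to gauge fields,
# with molecular-dynamics forces from the (pseudo)fermion determinant.
import numpy as np

rng = np.random.default_rng(42)

def S(phi):  return 0.5 * phi**2 + 0.25 * phi**4   # Euclidean action
def dS(phi): return phi + phi**3                   # its derivative (the force)

def leapfrog(phi, p, eps, nsteps):
    p -= 0.5 * eps * dS(phi)          # initial half-step for the momentum
    for _ in range(nsteps - 1):
        phi += eps * p
        p -= eps * dS(phi)
    phi += eps * p
    p -= 0.5 * eps * dS(phi)          # final half-step
    return phi, p

def hmc_step(phi, eps=0.1, nsteps=10):
    p = rng.normal()                  # refresh momentum from a Gaussian
    H_old = 0.5 * p**2 + S(phi)
    phi_new, p_new = leapfrog(phi, p, eps, nsteps)
    H_new = 0.5 * p_new**2 + S(phi_new)
    # Metropolis accept/reject corrects the integrator's discretization error
    if rng.random() < np.exp(H_old - H_new):
        return phi_new, True
    return phi, False

phi, acc, samples = 0.0, 0, []
for _ in range(10000):
    phi, accepted = hmc_step(phi)
    acc += accepted
    samples.append(phi)

print(f"acceptance = {acc / 10000:.2f}, <phi^2> = {np.mean(np.square(samples)):.3f}")
```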

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:27 AM

June 10, 2019

Matt Strassler - Of Particular Significance

Minor Technical Difficulty with WordPress

Hi all — sorry to bother you with an issue you may not even have noticed, but about 18 hours ago a post of mine that was under construction was accidentally published, due to a WordPress bug.  Since it isn’t done yet, it isn’t readable (and has no figures yet) and may still contain errors and typos, so of course I tried to take it down immediately.  But it seems some of you are still getting the announcement of it or are able to read parts of it.  Anyway, I suggest you completely ignore it, because I’m not done working out the scientific details yet, nor have I had it checked by my more expert colleagues; the prose and perhaps even the title may change greatly before the post comes out later this week.  Just hang tight and stay tuned…

by Matt Strassler at June 10, 2019 11:43 PM

Matt Strassler - Of Particular Significance

The Black Hole Photo: Controversy Begins To Bubble Up

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public.  Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t.  There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt. This concern seems to have been warranted.  This is the first post of several in which I’ll explain the issue as I see it.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? That’s been a problematic issue from the beginning, but discussion is starting to heat up.  And it’s important: it has implications for the measurement of the black hole’s mass, and of any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it.  So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well.  If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, ​that’s not enough information.  We don’t get to observe black hole itself (it’s black, after all!)   What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.)  The black hole then bends the paths of the radio waves extensively, even more than does a glass lens, so that where things appear in the image is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time.  (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and a paper by Gralla, Holz and Wald has just said it publicly.

When the EHT `photo’ appeared, it was widely reported that it shows the image of a photon sphere at the edge of the shadow (or ‘quasi-silhouette‘, a term I suggested as somewhat less misleading) of the M87bh.

[Like the Earth’s equator, the photon sphere is a location, not an object.  Photons (the particles that make up light, radio waves, and all other electromagnetic radiation) that move along the photon sphere have special, spherical orbits around the black hole.]

Unfortunately, it seems likely that these statements are incorrect; and Gralla et al. have said almost as much in their new preprint (though they were careful not to make a precise claim.)

 

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully back then, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was.  You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. A couple of weeks later, as I studied the equations and conversed with colleagues, I learned why: for a rotating black hole, the photon sphere doesn’t really exist.  There’s a broad `photon-zone’ where photons can have special orbits, but you won’t ever see the whole photon zone in an image of a rotating black hole.  Instead a piece of the photon zone will show up as a `photon ring’, a bright thin loop of radio waves.

But this ring is not the edge of anything spherical, is generally not perfectly circular, and is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it probably has no photon-sphere.  But does it show a photon ring?  Although some of the EHT folks seemed to suggest the answer was ‘yes’, Gralla et al. suggest the answer is likely `no’ (and my Harvard colleagues were finding the same thing.)  It seems unlikely that the circlet of radio waves that appears in the EHT `photo’ is really an image of M87bh’s photon ring anyway; it’s probably something else.  That’s where controversy starts.

…so the Dark Patch is Probably Not the Full Shadow

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’) but no matter what you call it, in its ideal form it is a perfectly dark area whose edge is the photon ring.    But in reality the perfectly dark area need not appear so dark after all; it may be filled in by various effects.  Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway.

Step-By-Step Approach

To explain these points will take some time and care, so I’m going to spread the explanation out over several blog posts.  Otherwise it’s just too much information too fast, and I won’t do a good job writing it down.  So bear with me… expect at least three more posts, probably four, and even then there will still be important issues to return to in future.

The Appearance of a  Black Hole With Nearby Matter

Because fast-rotating black holes are complicated, I’m going to illuminate the controversy using a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh is probably rotating at a very good clip. (At the end of this post I’ll briefly describe some key differences between the appearance of non-rotating black holes, rotating black holes observed along the rotation axis, and rotating black holes observed just a bit off the rotation axis.)

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.)  See Figure 1.  Where would the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) That’s the orange arrow in Figure 1.  But then there will be an indirect image from light that goes halfway (the green arrow in Figure 1) around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

What you can see in Figure 1 is that both the first and second indirect images are formed by light (er, radio waves) that spends part of its time close to a special radius around the black hole, shown as a dotted line. This, in the case of a non-rotating black hole, is an honest “photon-sphere”.

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north pole; there’s a special circle then too.  But that circle is not the edge of a photon-sphere!  In general, photons can orbit in a wide region, which I’ll call the “photon-zone.” You’ll see photons from other parts of the photon zone if you look at the black hole not from the north pole but from some other angle.

What our radio-wave camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is bright; this is one of Gralla et al.’s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high-resolution camera, we’ll never pick those other images up. Consequently, all we can really hope to detect with something like EHT is the direct image and the first indirect image.

WARNING (since this seems to be a common confusion even after two months):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGE ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now? Figure 4: The direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

Importantly, if we replace that circular bulb [shown yellow in Figure 5] by one of a larger or smaller radius [shown blue in Figure 5], then (Figure 6) the inner direct image would look larger or smaller to us, but the indirect images would barely move. They remain very close to the same size no matter how big a circular bulb we choose.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 7? That’s relevant because black holes are thought to have `accretion disks’ of material (possibly quite thick — I’m showing a very thin one for illustration, but they can be as thick as the black hole is wide, or even thicker) that orbit them. The accretion disk may be the source of the radio waves at M87’s black hole. Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown on one side of the disk as an orange wash) would form a disk in your camera (Figure 8); the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be the same. However, the indirect images (the first of which is shown going halfway about the black hole as a green wash) would all pile up in the same place from your perspective, forming a bright and quite thin ring. This is the photon ring for a non-spinning black hole — the full set of indirect images of everything that lies at or inside the photon sphere but outside the black hole horizon.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, completely unobservable at EHT, the `photon ring’. I don’t know if their notation will be adopted but you might hear `lensed ring’ referred to in future. In any case, what EHT calls the photon ring includes what Gralla et al. call the lensed ring.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, it had to be pushed to its absolute limits.  Someday we’ll see both the disk and the ring, but right now, they’re all blurred together.  So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that blurring the ring makes it dimmer than the disk! The photo, therefore, will show mainly the accretion disk, not the photon ring! This is shown in Figure 9, which you can compare with the Black Hole `photo’ (Figure 10).  (Figure 9 is symmetric around the ring, but the photo is not, for multiple reasons — rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer til another post.)

More precisely, the ring and disk blur together, but the image is dominated by the disk.

Let’s say that again: the black hole `photo’ is likely showing the accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo.’  In general, the patch may well be smaller than what is usually termed the `shadow’ of the black hole.

This is very important. The photon ring’s radius barely depends on the rotation rate of the black hole, and therefore, if the light were coming from the ring, you’d know (without knowing the black hole’s rotation rate) how big its dark patch will appear for a given mass. You could therefore use the radius of the ring in the photo to determine the black hole’s mass. But the accretion disk’s properties and appearance can vary much more. Depending on the spin of the black hole and the details of the matter that’s spiraling in to the black hole, its radius can be larger or smaller than the photon ring’s radius… making the measurement of the mass both more ambiguous and — if you partially mistook the accretion disk for the photon ring — potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important.  For a non-rotating black hole the accretion disk’s inner edge is expected to lie outside the photon ring, but for a fast-rotating black hole (as M87’s may well be), it will lie inside the photon ring. And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette). It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

Gralla et al. subtly raise these questions but are careful not to overstate their case, because they have not yet completed their study of rotating black holes. But the question is now in the air. I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures.

(Rather than the accretion disk, it is also possible that the dominant emission comes from the inner portion of one of the jets that emerges from the vicinity of the black hole. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Why the Gargantua Black Hole From Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 11.

One thing that isn’t included in the Gargantua image from the movie (Figure 12) is a sort of Doppler effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so the image will be very asymmetric, unlike the movie image. See Figure 13 with what it would really `look’ like to the EHT.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole, and does not show the Doppler effect correctly (for artistic reasons).

Where a Rotating Black Hole Differs

Before I quit for the day, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) What a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A north-pole observer, an equatorial observer, and a near-north-pole observer see quite different things. (We are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are reasons to expect this. Even then, the story is complex.

2) Instead of a photon-sphere, there is what you might call a `photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. As I mentioned above, for high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; this leads to multiple indirect images of the disk and a potentially bright photon ring.

3) However, depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from points along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. That further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that we cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 10, 2019 04:04 AM

June 05, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVII: Super-Entropic Instability

I'm quite excited because of some new results I got recently, which appeared on the ArXiv today. I've found a new (and I think, possibly important) instability in quantum gravity.

Said more carefully, I've found a sibling to Hawking's celebrated instability that manifests itself as black hole evaporation. This new instability also results in evaporation, driven by Hawking radiation, and it can appear for black holes that might not seem unstable to evaporation in ordinary circumstances (i.e., there's no Hawking channel to decay), but turn out to be unstable upon closer examination, in a larger context. That context is the extended gravitational thermodynamics you've read me talking about here in several previous posts (see e.g. here and here). In that framework, the cosmological constant is dynamical and enters the thermodynamics as a pressure variable, p. It has a conjugate, V, which is a quantity that can be derived once you know the pressure and the mass of the black hole.

Well, Hawking evaporation is a catastrophic quantum phenomenon that follows from the fact that the radiation temperature of a Schwarzschild black hole (the simplest one you can think of) goes inversely with the mass. So the black hole radiates and loses energy, reducing its mass. But that means that it will radiate at even higher temperature, driving its mass down even more. So it will radiate even more, and so on. So it is an instability in the sense that the system drives itself even further away from where it started at every moment. Like a pencil falling over from balancing on a point.
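
In formulas, this is a standard back-of-the-envelope estimate (Stefan-Boltzmann emission from a horizon of area $A \propto M^2$; my gloss, not anything specific to the new instability):

$$ T_H \propto \frac{1}{M}, \qquad \frac{dM}{dt} \propto -A\,T_H^4 \propto -\frac{1}{M^2} \quad\Longrightarrow\quad M(t)^3 = M_0^3 - \mathrm{const}\cdot t,$$

so the mass reaches zero in a finite time of order $M_0^3$, shrinking ever faster as it goes.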

This is the original quantum instability for gravitational systems. It's, as you probably know, very important. (Although in our universe, the temperature of radiation is so tiny for astrophysical black holes (they have large mass) that the effect is washed out by the local temperature of the universe... But if the universe ever had microscopic black holes, they'd have radiated in this way...)

So very nice, so very 1970s. What have I found recently?

A nice way of expressing the above instability is to simply say [...] Click to continue reading this post

The post News from the Front, XVII: Super-Entropic Instability appeared first on Asymptotia.

by Clifford at June 05, 2019 02:11 AM

May 27, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A conference in Paris

This week I’m in Paris, attending a conference in memory of the outstanding British astronomer and theoretician Arthur Stanley Eddington. The conference, which is taking place at the Observatoire de Paris, is designed to celebrate the centenary of Eddington’s famous measurement of the bending of distant starlight by the sun, a key experiment that offered important early support for Einstein’s general theory of relativity. However, there are talks on lots of different topics, from Eddington’s philosophy of science to his work on the physics of stars, from his work in cosmology to his search for a unified field theory. The conference website and programme are here.


The view from my hotel in Denfert-Rochereau

All of the sessions of the conference were excellent, but today was a particular treat with four outstanding talks on the 1919 expedition. In ‘Eddington, Dyson and the Eclipse of 1919’, Daniel Kennefick of the University of Arkansas gave a superb overview of his recent book on the subject. In ‘The 1919 May 29 Eclipse: On Accuracy and Precision’, David Valls-Gabaud of the Observatoire de Paris gave a forensic analysis of Eddington’s calculations. In ‘The 1919 Eclipse: Were the Results Robust?’, Gerry Gilmore of the University of Cambridge described how recent reconstructions of the expedition measurements gave confidence in the results; and in ‘Chasing Mare’s Nests: Eddington and the Early Reception of General Relativity among Astronomers’, Jeffrey Crelinsten of the University of Toronto summarized the doubts expressed by major American astronomical groups in the early 1920s, as described in his excellent book.

Book covers: ‘No Shadow of a Doubt’ by Daniel Kennefick and ‘Einstein’s Jury’ by Jeffrey Crelinsten

I won’t describe the other sessions, but just note a few things that made this conference the sort of meeting I like best. All speakers were allocated the same speaking time (30 mins including questions); most speakers were familiar with each other’s work; many speakers spoke on the same topic, giving different perspectives; there was plenty of time for further questions and comments at the end of each day. So a superb conference organised by Florian Laguens of the IPC and David Valls-Gabaud of the Observatoire de Paris.


On the way to the conference

In my own case, I gave a talk on Eddington’s role in the discovery of the expanding universe. I have long been puzzled by the fact that Eddington, an outstanding astronomer and strong proponent of the general theory of relativity, paid no attention when his brilliant former student Georges Lemaître suggested that an expanding universe could be derived from general relativity, a phenomenon that could account for the redshifts of the spiral nebulae, the biggest astronomical puzzle of the age. After considering some standard explanations (Lemaître’s status as an early-career researcher, the journal he chose to publish in and the language of the paper), I added two considerations of my own: (i) the theoretical analysis in Lemaître’s 1927 paper would have been very demanding for a 1927 reader and (ii) the astronomical data that Lemaître relied upon were quite preliminary (Lemaître’s calculation of a redshift/distance coefficient for the nebulae relied upon astronomical distances from Hubble that were established using the method of apparent magnitude, a method that was much less reliable than Hubble’s later observations using the method of Cepheid variables).


Making my points at the Eddington Conference

It’s an interesting puzzle because it is thought that Lemaître sent a copy of his paper to Eddington in 1927. However, I finished by admitting that there is a distinct possibility that Eddington simply didn’t take the time to read his former student’s paper. Sometimes the most boring explanation is the right one! The slides for my talk can be found here.

All in all, a superb conference.

 

by cormac at May 27, 2019 07:39 PM

May 24, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVI: Toward Quantum Heat Engines

(The following post is a bit more technical than usual. But non-experts may still find parts helpful.)

A couple of years ago I stumbled on an entire field that I had not encountered before: the study of Quantum Heat Engines. This sounds like an odd juxtaposition of terms since, as I say in the intro to my recent paper:

The thermodynamics of heat engines, refrigerators, and heat pumps is often thought to be firmly the domain of large classical systems, or put more carefully, systems that have a very large number of degrees of freedom such that thermal effects dominate over quantum effects. Nevertheless, there is a thriving field devoted to the study—both experimental and theoretical—of the thermodynamics of machines that use small quantum systems as the working substance.

It is a fascinating field, with a lot of activity going on that connects to fields like quantum information, device physics, open quantum systems, condensed matter, etc.
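
To make “a small quantum system as the working substance” concrete, here is a minimal sketch of the simplest textbook example, a quantum Otto cycle run on a single two-level system (my toy illustration with made-up numbers, not the holographic engines discussed below):

```python
import numpy as np

# Quantum Otto cycle with a two-level working substance (k_B = hbar = 1).
# Strokes: (1) thermalise at splitting D_h with the hot bath T_h,
# (2) adiabatically reduce the splitting D_h -> D_c (populations frozen),
# (3) thermalise at D_c with the cold bath T_c, (4) adiabatically restore D_h.
# All numbers below are illustrative assumptions.

def excited_population(delta, temperature):
    """Thermal excited-state population of a two-level system."""
    return 1.0 / (1.0 + np.exp(delta / temperature))

D_h, D_c = 2.0, 1.0   # level splittings
T_h, T_c = 4.0, 1.0   # bath temperatures

p_h = excited_population(D_h, T_h)  # population leaving the hot stroke
p_c = excited_population(D_c, T_c)  # population leaving the cold stroke

Q_h = D_h * (p_h - p_c)  # heat taken from the hot bath
Q_c = D_c * (p_c - p_h)  # heat dumped to the cold bath (negative)
W = Q_h + Q_c            # net work output per cycle

print(f"work/cycle = {W:.3f}, efficiency = {W / Q_h:.3f}")
print(f"Otto efficiency 1 - D_c/D_h = {1 - D_c / D_h:.3f}")  # they agree
```

The efficiency depends only on the ratio of the two level splittings, in close analogy with how the classical Otto efficiency depends only on the compression ratio.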

Anyway, I stumbled on it because, as you may know, I've been thinking (in my 21st-meets-18th century way) about heat engines a lot over the last five years since I showed how to make them from (quantum) black holes, when embedded in extended gravitational thermodynamics. I've written it all down in blog posts before, so go look if interested (here and here).

In particular, it was while working on a project I wrote about here that I stumbled on quantum heat engines and got thinking about their power and efficiency. That was when I had a very happy thought: could I show that holographic heat engines (the kind I make using black holes), at least a class of them, are actually, in some regime, quantum heat engines? That would be potentially super-useful and, of course, super-fun.

The blunt headline statement is that they are, obviously, because every stage [...] Click to continue reading this post

The post News from the Front, XVI: Toward Quantum Heat Engines appeared first on Asymptotia.

by Clifford at May 24, 2019 05:16 PM

May 14, 2019

Axel Maas - Looking Inside the Standard Model

Acquiring a new field
I have recently started to look into a new field: quantum gravity. In this entry, I would like to write a bit about how acquiring a new field happens, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing, and when you enter a new field, things are slow at first. You have to learn a lot of basics, get an overview of what has been done and what is still open, and get used to a different jargon. Thus, one rarely takes such a step without good reason.

I have already written one entry in the past about how I came to do Higgs physics. That entry was written after the fact: I was looking back and discussing my motivation as I saw it at the time. It will be interesting to look back at this entry in a few years and judge what is left of my original motivation, and how I feel about it knowing what has happened since. But for now, I only know the present. So, let's get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe, though even then a few open questions would appear to remain.

But my interest in quantum gravity comes not from the promise of such a possibility. It has rather a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs and, by now, in fact at the standard model as a whole. In what we get, we see a clear distinction between two concepts: so-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or combine. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact: after all, which gauge symmetry can be used to facilitate things differs from theory to theory, and thus the kind of gauge symmetry is more a statement about which theory we are working on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity space itself is different: in particular, you can deform it. Now, the original distinction between global symmetries and gauge symmetries is their relation to space. A global symmetry is something which is the same from point to point. A gauge symmetry allows changes from point to point. Loosely speaking, of course.

In gravity, space is no longer fixed; it can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives on space automatically a gauge symmetry? If we want to carry over the results of what we did in the standard model, where there is no gravity, to the real world, where there is gravity, this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot answer it yet.

These questions had been in the back of my mind for some time. A few years, actually; I do not know exactly how many. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to them again and again. I knew that eventually I would need to address them if nobody else did. So far, nobody has.

But why now? What prompted me to start with it now? As so often in science, it was other scientists.

Last year, at the end of November/beginning of December, I took part in a conference in Vienna, where I had been invited to talk about our research. The meeting had a quite wide scope, and several people who work on black holes and quantum physics were also present. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory with strong gravitational effects, usually a black hole. In such a setup, the deformations of space are fixed, and even non-quantum black holes can swallow stuff. This combination appears to have the following consequence: global symmetries seem to become meaningless, because everything associated with them can vanish into the black hole. However, keeping the space deformations fixed means that local symmetries are also fixed, so they appear to become real instead of auxiliary. This seems to be quite the opposite of our result, and it, together with the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect does seem to be there.

However, in a full quantum gravity theory the game changes: space deformations become dynamical as well. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical too; they evolve. Thus, answering what really happens requires full quantum gravity. Because of this, I decided to start working actively on quantum gravity: I needed to know whether our picture of symmetries survives, at least approximately, when there is quantum gravity, and to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basic material. I now have a rough idea of what is going on and what needs to be done, and I think I see a way in which everything can be reconciled and make sense. It will still take a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still take time before I have a first real result, but I am quite happy with how things are progressing.

And that is the story of how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

by Axel Maas (noreply@blogger.com) at May 14, 2019 03:03 PM

May 12, 2019

Marco Frasca - The Gauge Connection

Is it possible to get rid of exotic matter in warp drive?

In 1994, Miguel Alcubierre proposed a solution of the Einstein equations (see here) describing a space-time bubble moving at arbitrary speed. It is important to notice that no violation of the light-speed limit happens, because it is space-time itself that moves and inside the bubble everything goes as expected. This kind of solution of the Einstein equations has a fundamental drawback: it violates the Weak Energy Condition (WEC) and, in order to exist, requires some exotic matter with negative energy density. Needless to say, nobody has ever seen such matter. There seems to be some clue in the way the Casimir effect works, but this relies on the way one interprets quantum fields rather than on evidence of its existence. Besides, since the initial proposal, a great number of studies have been published showing how pathological Alcubierre's solution can be, also drawing on quantum field theory (e.g. Hawking radiation). So, for now we have to defer the dream of interstellar travel, hoping that some smart guy will one day come up with a better solution.
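
For reference, the line element Alcubierre wrote down (in units with $c=1$, where $v_s = dx_s/dt$ is the velocity of the bubble centre and $f(r_s)$ is a form factor equal to 1 inside the bubble and 0 far away) is

$$ ds^2 = -dt^2 + \bigl(dx - v_s\,f(r_s)\,dt\bigr)^2 + dy^2 + dz^2.$$

Inside the bubble, where $f=1$, an observer co-moving with it is in free fall and feels nothing special; all the peculiar physics sits in the bubble wall.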

Of course, Alcubierre’s solution is rather interesting from a physical point of view, as it belongs to a family of older solutions, like wormholes, time machines and the like, yielded by very famous authors such as Kip Thorne, which arise when one imposes a solution and then checks the conditions for its existence. This amounts to determining the energy-momentum tensor, which, unavoidably, turns out to be negative. These solutions therefore violate one or another energy condition of the Einstein equations, granting pathological behaviour. On the other side, they appear the most palatable material for science fiction about possible futures of space and time travel. In these times, when such technologies are widely employed by the film industry, moving the fantasy of millions, we would hope that such futures could also be possible.

It is interesting to note the procedure used to obtain these particular solutions. One engineers a metric at one's desk and then substitutes it into the Einstein equations to see whether it is really a solution; in this way one fixes the energy requirements. Going the other way around, it is difficult to come up out of the blue with a solution of the Einstein equations that displays such particular behaviour. It is also possible that such solutions are simply not possible and always imply a violation of the energy conditions. Some theorems proved in the course of time seem to prohibit them (e.g. see here). Of course, I am convinced that the energy conditions must be respected if we want the physics that describes our universe. They cannot be evaded.

So, turning to the question in the title: could we think of a possible warp-drive solution of the Einstein equations without exotic matter? The answer can be yes, of course, provided we are able to recover the York time, or warp factor, in the way Alcubierre obtained it with his pathological solution. At first, this seems an impossible mission. But the space-time bubble we are considering is a very small perturbation, and perturbation theory can come to the rescue, particularly when this perturbation can be locally very strong. In 2005, I proposed such a solution (see here), together with a technique to solve the Einstein equations when the metric is strongly perturbed. My intent at that time was to give a proof of the BKL conjecture; a smart referee suggested that I give an example application of the method. The metric I obtained in this way, by perturbing a Schwarzschild metric, yields a solution with a York time (warp factor) identical to that of Alcubierre's metric. Of course, I respect the energy conditions, as I am directly solving Einstein equations that do.

The identity between the York times is obtained provided the form factor proposed by Alcubierre is taken to be 1, but this is just the simplest case. Here is an animation of my warp factor.

Warp factor

The bubble is seen moving as expected along the x direction.

My personal hope is that this will go beyond a mathematical curiosity. On the other side, it should be understood how to produce this kind of perturbation on a given metric. I can think of the Einstein-Maxwell equations solved using perturbation theory. There is a lot of literature and a lot of great contributions on this subject.

Finally, this could give a meaning to the following video by NASA.

by mfrasca at May 12, 2019 05:59 PM

April 30, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Week at The Surf Experience

I don’t often take a sun holiday these days, but I had a fabulous time last week at The Surf Experience in Lagos, Portugal. I’m not an accomplished surfer by any measure, but there is nothing quite like the thrill of catching a few waves in the sea with the sun overhead – a nice change from the indoors world of academia.

Not for the first time, I signed up for a residential course with The Surf Experience in Lagos. Founded by veteran German surfer Dago Lipke, The Surf Experience puts guests up at the surf lodge Vila Catarina, a lovely villa in the hills above Lagos, complete with beautiful gardens and swimming pool. Sumptuous meals are provided by Dago’s wife Connie, a wonderful cook. Instead of wandering around town trying to find a different restaurant every evening, guests enjoy an excellent meal in a quiet setting in good company, followed by a game of pool or chess. And it really is good company. Guests at TSE tend mainly to hail from Germany and Switzerland, with a sprinkling from France and Sweden, so it’s truly international – quite a contrast to your average package tour (or indeed our college staff room). Not a mention of Brexit, and an excellent opportunity to improve my German. (Is that what you tell yourself? – Ed)


Hanging out at the pool before breakfast


Fine dining at The Surf Experience


A game of cards and a conversation instead of a noisy bar

Of course, no holiday is perfect and in this case I managed to pick up an injury on the first day. Riding the tiniest wave all the way back to the beach, I got unexpectedly thrown off, hitting my head off the bottom at speed. (This is the most elementary error you can make in surfing and it risks serious injury, from concussion to spinal fracture.) Luckily, I walked away with nothing more than severe bruising to the neck and chest (as later established by X-ray at the local medical clinic, also an interesting experience). So no life-altering injuries, but like a jockey with a broken rib, I was too sore to get back on the horse for a few days. Instead, I tried Stand Up Paddling for the first time, which I thoroughly enjoyed. It’s more exciting than it looks; I must get my own board for calm days at home.


Stand Up Paddling in Lagos with Kiteschool Portugal

Things got even better towards the end of the week as I began to heal. Indeed, the entire surf lodge had a superb day’s surfing yesterday on beautiful small green waves at a beach right next to town (in Ireland, we very rarely see clean conditions like this; the surf is mainly driven by wind). It was fantastic to catch wave after wave throughout the afternoon, even if clambering back on the board after each one wasn’t much fun for yours truly.

This morning, I caught a Ryanair flight back to Dublin from Faro, should be back in the office by late afternoon. Oddly enough, I feel enormously refreshed – perhaps it’s the feeling of gradually healing. Hopefully the sensation of being continuously kicked in the ribs will disappear soon and I’ll be back on the waves in June. In the meantime, this week marks a study period for our students before their exams, so it’s an ideal time to prepare my slides for the Eddington conference in Paris later this month.

Update

I caught a slight cold on the way back, so today I’m wandering around college like a lunatic going cough, ‘ouch’ , sneeze, ‘ouch’.  Maybe it’s karma for flying Ryanair – whatever about indulging in one or two flights a year, it’s a terrible thing to use an airline whose CEO continues to openly deny the findings of climate scientists.

 

by cormac at April 30, 2019 09:49 PM

April 24, 2019

Andrew Jaffe - Leaves on the Line

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision and tutoring, alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, working up from the experimentation of my first time through in 2009, putting it all on a firmer foundation — and writing up the lecture notes — in 2010, and refining it over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.

by Andrew at April 24, 2019 01:19 AM

April 23, 2019

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

April 06, 2019

Andrew Jaffe - Leaves on the Line

@TheMekons make the world alright, briefly, at the 100 Club, London.

by Andrew at April 06, 2019 10:17 AM

March 31, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

My favourite conference; the Institute of Physics Spring Weekend

This weekend I attended the annual meeting of the Institute of Physics in Ireland. I always enjoy these meetings – more relaxing than a technical conference and a great way of keeping in touch with physicists from all over the country. As ever, there were a number of interesting presentations, plenty of discussions of science and philosophy over breakfast, lunch and dinner, all topped off by the annual awarding of the Rosse Medal, a highly competitive award for physics postgraduates across the nation.


The theme of this year’s meeting was ‘A Climate of Change’ and thus the programme included several talks on the highly topical subject of anthropogenic climate change. First up was ‘The science of climate change’, a cracking talk on the basic physics of climate change by Professor Joanna Haigh of Imperial College London. This was followed by ‘Climate change: where we are post the IPCC report and COP24’, an excellent presentation by Professor John Sweeney of Maynooth University on the latest results from the IPCC. Then it was my turn. In ‘Climate science in the media – a war on information?’, I compared the coverage of climate change in the media with that of other scientific topics such as medical science and big bang cosmology. My conclusion was that climate change is a difficult subject to convey to the public, and matters are not helped by actors who deliberately attempt to muddle the science and downplay the threat. You can find details of the full conference programme here and the slides for my own talk are here.

 

Images of my talk from IoP Ireland 

There followed a panel discussion in which Professor Haigh, Professor Sweeney and I answered questions from the floor on climate science. I don’t always enjoy panel discussions, but I think this one was useful, thanks to some excellent chairing by Paul Hardaker of the Institute of Physics.


Panel discussion on the threat of anthropogenic climate change

After lunch, we were treated to a truly fascinating seminar: ‘Tropical storms, hurricanes, or just a very windy day?: Making environmental science accessible through Irish Sign Language’, by Dr Elizabeth Mathews of Dublin City University, on the challenge of making media descriptions of threats such as storms, hurricanes and climate change accessible to deaf people. This was followed by a most informative talk by Dr Bajram Zeqiri of the National Physical Laboratory on the recent redefinition of the kilogram, ‘The measure of all things: redefinition of the kilogram, the kelvin, the ampere and the mole’.

Finally, we had the hardest part of the day, the business of trying to select the best postgraduate posters and choosing a winner from the shortlist. As usual, I was blown away by the standard, far ahead of anything I or my colleagues ever produced. In the end, the Rosse Medal was awarded to Sarah Markham of the University of Limerick for a truly impressive poster and presentation.


Viewing posters at the IoP 2019 meeting; image courtesy of IoP Ireland

All in all, another super IoP Spring weekend. Now it’s back to earth and back to teaching…

by cormac at March 31, 2019 08:51 PM

March 29, 2019

Robert Helling - atdotde

Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: we know less about the mathematics of the periodic table than I thought.



In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is a filling rule (Madelung's ordering, usually taught together with Hund's rules) that tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester at university, you learn to derive those using Schrödinger's equation: you diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the principal quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
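
As a reminder of what that story predicts, here is a minimal sketch of the filling order (purely illustrative code; with this small basis the 5s level that precedes 4d in real atoms is simply absent):

```python
# Madelung ordering behind the aufbau picture: subshells fill in order of
# increasing n + l, ties broken by smaller n; each holds 2*(2l+1) electrons.

def madelung_order(n_max=4):
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

labels = "spdf"
for n, l in madelung_order():
    capacity = 2 * (2 * l + 1)  # Pauli exclusion: 2 spin states per m_l value
    print(f"{n}{labels[l]} holds {capacity} electrons")
# Output starts 1s 2s 2p 3s 3p 4s 3d 4p ..., matching the chemists' ordering.
```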

Except that it doesn't: this is only true for the hydrogen atom. The Hamiltonian for an atom of nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i<j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there is no interaction between the electrons, and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy (there are $N(N-1)/2$ pairs of electrons), so it is of the same order (for almost neutral atoms, where $N \approx Z$) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).


Band structure in a NPN-transistor
Also in that case, you pretend there is only one electron in the world, which feels the periodic electric potential created by the nuclei and all the other electrons; those other electrons no longer show up in the wave function but only as a charge density.
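
In case a concrete picture helps, here is a minimal numerical illustration (my toy example, a single Fourier component $V$ of the lattice potential with made-up numbers) of how this one-electron treatment produces bands and gaps:

```python
import numpy as np

# 1D nearly-free-electron model (lattice constant 1, hbar = m = 1): free
# electrons plus one Fourier component V of the periodic potential, which
# couples plane waves k + 2*pi*m differing by a reciprocal lattice vector
# and opens a gap of ~2V at the zone boundary k = pi.

V, n_G = 0.5, 5
m = np.arange(-n_G, n_G + 1)
coupling = V * (np.diag(np.ones(len(m) - 1), 1) + np.diag(np.ones(len(m) - 1), -1))

for k in np.linspace(0.0, np.pi, 5):
    H = np.diag((k + 2 * np.pi * m) ** 2 / 2.0) + coupling
    E1, E2 = np.linalg.eigvalsh(H)[:2]  # two lowest Bloch bands at this k
    print(f"k = {k:.2f}: E1 = {E1:.3f}, E2 = {E2:.3f}")
# At k = pi the two lowest levels differ by ~2V = 1.0: a band gap, absent for
# V = 0, which is what band-structure (and transistor) physics rests on.
```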

For atoms you could try to tell a similar story by taking the inner electrons into account, saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons, and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to it than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are at long wavelengths compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one Euclidean dimension (and yes, in 1D the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regard to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring $\hbar$ back in) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})=O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be at $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
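
For completeness, here is the scaling argument spelled out. The Thomas-Fermi functional is

$$ E^{TF}[\rho] = c\int \rho^{5/3}\,dx - Z\int \frac{\rho(x)}{|x|}\,dx + \frac{1}{2}\int\!\!\int \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy,$$

and under the rescaling $\rho(x) = Z^2\,\tilde\rho(Z^{1/3}x)$ (which keeps $\int\rho$ proportional to $Z$) each of the three terms picks up exactly a factor of $Z^{7/3}$, so minimising over $\tilde\rho$ gives $E^{TF}(Z) = Z^{7/3}\,E^{TF}(1)$.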

On the other hand, the chemistry of the periodic table (where the column is supposed to predict the chemical properties of an atom, expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So another sensible limit appears to be to keep $N$ small and fixed and send only $Z\to\infty$. Of course, this no longer really describes atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.
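
Concretely (a one-line computation with the Hamiltonian above): substituting $x_i = y_i/Z$ turns it into

$$ H = Z^2\left(-\sum_{i=1}^N \Delta_{y_i} - \sum_{i=1}^N \frac{1}{|y_i|} + \frac{1}{Z}\sum_{i<j}^N \frac{1}{|y_i-y_j|}\right),$$

so, up to the overall factor of $Z^2$, the electron-electron repulsion comes with a coupling $1/Z$ that becomes small in this limit.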

Friesecke (from TUM) and collaborators have made impressive progress in this direction, and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. Still, there is some way to go, even 150 years after Mendeleev.

by Unknown (noreply@blogger.com) at March 29, 2019 11:02 AM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
September 16, 2019 04:05 PM
All times are UTC.

Suggest a blog:
planet@teilchen.at