Heisenberg’s final theory

Last month, I was at Foundations 2018 in Utrecht. It is one of the biggest conferences on the foundations of physics, bringing together physicists, philosophers, and historians of science. A talk I found particularly interesting was that of Alexander Blum, from the Max Planck Institute for the History of Science, entitled Heisenberg’s 1958 Weltformel & the roots of post-empirical physics. Let me briefly summarize Blum’s fascinating story.

In 1958, Werner Heisenberg put forward a new theory of matter that, according to his peers (and to every physicist today), could not possibly be correct, failing to reproduce most known microscopic phenomena. Yet he firmly believed in it, worked on it relentlessly (at least for a while), and presented it to the public as a major breakthrough. How was such an embarrassment possible, given that Heisenberg was one of the brightest physicists of the time? One could try to find the answer in Heisenberg’s personal shortcomings, in his psychology, in his age, perhaps even in his deluded attempt at making a comeback after the sad episode of his work on the Nazi Uranprojekt during World War II. Blum’s point is that the answer lies, rather, in the very peculiar nature of modern physical theories, where mathematical constraints strongly guide theory building.

Heisenberg’s theorizing was made possible by the strong consistency constraints that quantum field theory (QFT) imposes. His goal was to find the ultimate theory not with the help of empirical results (like those coming from early colliders), but from pure theory, with a single principle added to those of QFT. His idea was to ask for radical monism: deep down, there has to be just one fundamental featureless particle. It has to have spin 1/2 so that integer-spin particles can be obtained effectively as bound states. The only non-trivial option is then to add a non-renormalizable quartic interaction term to the free Dirac Lagrangian.


Heisenberg’s Weltformel at the 1958 Planck Centenary in West Berlin. Source: DPA

With only a single fundamental self-interacting spin-1/2 particle, the theory seems far removed from the physics we know. Sure, it could be that all the physics we know, with leptons, hadrons, and electromagnetic forces, emerges effectively from non-trivial bound states of this fundamental particle. It could be, but most likely it is not the case, and so Heisenberg’s crazy conjectures should be easy to disprove. But here comes the catch: the theory is non-renormalizable, and there existed no reliable way to extract predictions from it at the time. It is impossible to falsify something that is not even consistent in the weakest sense available. Heisenberg could argue: maybe the theory is non-renormalizable only at the perturbative level, maybe the singular behavior of the propagator is just a feature of the free theory… He could also exploit the fact that there were strong doubts about the consistency of QFT anyway, given the Landau pole and Dyson’s argument about the divergence of perturbative expansions.

Interestingly, it is partly to conclusively disprove Heisenberg’s proposal that rigorous approaches to quantum field theory were developed. Working at the same institute as Heisenberg but deeply skeptical of his theory, Harry Lehmann, Kurt Symanzik, and Wolfhart Zimmermann laid the basis of axiomatic field theory. The Källén-Lehmann (K-L) spectral representation theorem, which shows as a corollary that an interacting propagator cannot be more regular than a free propagator, provided a no-go theorem disproving Heisenberg’s speculations.

But Heisenberg could fight back. It was understood at the time that Quantum Electrodynamics contained (at least in some formulations) quantum states with negative norm, the so-called “ghosts”. Maybe such ghosts could be exploited to bypass the K-L theorem, yielding cancellations of divergences in the expression of the interacting propagator. This speculation led to an intense fight with Wolfgang Pauli in 1957, the “battle of Ascona”. Pauli argued that ghosts, if exploited in this fashion, would never “stay in the bottle” and would necessarily make the theory inconsistent. After six weeks of intense work, Heisenberg came up with a toy model that contained ghosts yet had a unitary S-matrix (hence was consistent in the sense required).

So Heisenberg’s theory was not easy to unequivocally kill, which of course does not make it correct. Heisenberg tried extracting predictions from it using new (unreliable) approximation methods, which gave essentially random results. Hence he had no option but to fall back on beauty, the only justification for his theory being its radically simple starting point. Nothing ever came of this line of research, which no one pursued after him. Blum ended his talk with a timely warning: one still needs to beware of falling into Heisenberg’s trap.

In a previous post, I made the simple point that theoretical physicists put too much trust in notions of beauty and mathematical simplicity because of survival bias: we remember the few instances in which it worked, but forget the endless list of neat constructions by excellent physicists that eventually proved empirically inadequate. I did not know of Heisenberg’s theory, but I gladly add it to the list.

Blum’s talk was a teaser for an article he told me is about to be finished. More generally, his study of Heisenberg’s Weltformel is the first step in an inquiry into theorists’ attempts at coming up with a theory of everything from post-empirical arguments (see a well-explained description of his group’s program). This is a timely research program.

One does not need to think too hard to see the obvious parallel between Heisenberg’s story and current attempts at coming up with a theory of everything (or of quantum gravity). One easily finds popular theories that do not manifestly fit known physics, but that do not obviously contradict it either. They could be correct, but we cannot know for lack of proper non-perturbative tools. Should we trust them only because they are so hard to conclusively disprove and obey some (quite subjectively) appealing principles?

Through Two Doors at Once

I have really enjoyed Anil Ananthaswamy’s latest book: Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality. It is very well written and one reads through it like a novel. But, most importantly, it gets the physics right, and the subtleties are not washed away with metaphors. Accurate and captivating, the book strikes a balance rarely reached in popular science books.

The foundations of quantum mechanics is a difficult branch of physics, and almost every narrative shortcut invented to convey its subtleties is, strictly speaking, a bit wrong. Further, foundations is an unfinished branch of physics: different groups of experts disagree about what the main message of quantum mechanics is and about what should be done to make progress in understanding it. This makes it hard to popularize the subject without writing incorrect platitudes or pushing one orthodoxy.

Anil’s strategy is to use the simplest experiment illustrating quantum phenomena: the double-slit experiment. He discusses the results and shows why they are so counter-intuitive. However, the simple double-slit experiment is not enough to get to the bottom of the mystery. Anil thus progressively refines the experimental setup, gradually adding the subtleties that prevent naive stories from explaining away the weirdness of quantum theory. As in a police investigation, Anil interviews the experts of the main interpretations of quantum mechanics and guides the reader through the explanations they give for each setup. The reader can then decide for herself which story she finds most appealing.

Crucially, I think the different interpretations are presented fairly. Anil does not take a side. I personally much prefer “non-romantic and realist” interpretations of quantum theory: I find accounts of the world where stuff simply moves, be it with non-local laws of motion, far more convincing than alternatives (where there are infinitely many worlds, or where “reality” has a subjective nature). The “realist” view is well represented in the book (which is rare, because it is not “hype”), but I was not annoyed by the thorough discussion of the other possibilities. More radical proponents of one or the other interpretation may however be annoyed by this attempted neutrality.

Anil’s writing style is very enjoyable. He does not make the all too common mistake of using cheap metaphors, which are dangerous in the context of quantum mechanics where they give a deceptive impression of depth and understanding. In this book, you actually learn something. Sure, you do not become an expert in foundations, but you get an accurate sense of what motivates researchers in the field. This is nice in itself, and useful if you want to keep on digging with a more specialized book. Even though I already knew the technical content of the book, I found the inquiry captivating. I definitely recommend Through Two Doors at Once, especially to my friends and family who want to quickly yet genuinely understand the sorts of questions that drive me.

Disclaimer: I provided minor help in rereading an almost finished draft of the book.

Survival bias and the non-empirical confirmation of physical theories

Survival Bias


drawing by McGeddon

During World War II, the US military gathered statistics on where its bombers were most damaged. The pattern looked like the picture on the right. The engineers’ first intuition was to reinforce the parts that were hit the most. Abraham Wald, a statistician, realized that the distribution of impacts was observed only for the airplanes that actually came back from combat. Those that were hit somewhere else probably did not even make it back home. Hence it is precisely where the planes seemed to be the least damaged that adding reinforcements was the most useful! This famous story illustrates the problem of survival bias (or survivorship bias).

Definition (Wikipedia): Survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not.

Survival bias is the reason why we tend to believe in planned obsolescence and, more generally, why we sometimes feel nostalgia for a golden age that never existed. “Back in the day, cars and refrigerators were reliable, unlike today! And back then, buildings were beautiful and lasted forever, unlike the crap they construct today!”

But actually none of this is true. Most refrigerators from the sixties stopped working within a few years, and the very few that still function today are just the 0.1% that made it. The same goes for cars, which are more reliable than they used to be: the vintage cars we see around show an impressive number of kilometers, but only because they are part of the infinitesimal fraction that miraculously survived. Finally, most buildings in earlier centuries were poorly constructed, lacking both taste and sturdiness. Most of them collapsed or got destroyed, and this is why new buildings now stand in their place. The few old monuments that remain are still there precisely because they were particularly beautiful and well constructed for their time. More generally, the remnants of the past we see in our everyday life are not a fair sample of what life used to be. They are, with rare exceptions, the only things that were good enough not to be replaced.


from the great xkcd

Survival bias can explain an impressively wide range of phenomena. For example, most hedge funds show stellar historical returns (even after fees) while investing in hedge funds is not profitable on average. This is easy to understand if hedge funds simply have random returns: the funds that lose money over a period go bankrupt or have to downsize for lack of investors, while the funds that make money survive and increase in size. The same bias explains why tech success stories are often overrated and why it seems that cats do not get more injured when they fall from a greater height (Wikipedia).
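
To see how little is needed for this to happen, here is a minimal Monte Carlo sketch (in Python, with made-up numbers: zero-mean random yearly returns and an arbitrary survival rule), not a model of actual markets:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_funds, n_years = 100_000, 10

# Every fund draws i.i.d. yearly returns with zero mean: no skill anywhere.
yearly_returns = rng.normal(loc=0.0, scale=0.15, size=(n_funds, n_years))
cumulative = np.cumprod(1.0 + yearly_returns, axis=1)

# Crude (made-up) survival rule: a fund folds if it ever loses
# more than 30% of its initial capital.
survived = cumulative.min(axis=1) > 0.7

print(f"funds surviving the decade:          {survived.mean():.1%}")
print(f"mean yearly return, all funds:       {yearly_returns.mean():+.2%}")
print(f"mean yearly return, survivors only:  {yearly_returns[survived].mean():+.2%}")
```

The survivors show a comfortably positive average return even though the whole population, by construction, has none: conditioning on survival selects the lucky paths.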

This bias very often misleads us in our daily life. My worry is that it may also mislead us in our assessment of physical theories, especially when we lack experimental data. To understand why, I need to discuss the problem of the “non-empirical confirmation” of physical theories.

Non-empirical confirmation of physical theories

Physicists always use some form of non-empirical assessment of physical theories. Most theories never get the chance to be explicitly falsified experimentally and are simply abandoned for non-empirical reasons: it is impossible to make computations with them, or they turn out to violate principles we thought should be universal. As the time between the invention of new physical theories and their possible experimental tests widens, it becomes important to know more precisely which non-empirical reasons we use to temporarily trust theories. The current situation of String Theory, which predicts new physics that seems untestable in the foreseeable future, is a prime example of this need.

This is a legitimate question that motivated a conference in Munich about two years ago, “Why trust a theory? Reconsidering scientific methodology in light of modern physics”, which was then actively discussed and reported on online (see e.g. Massimo Pigliucci, Peter Woit, and Sabine Hossenfelder). Among the speakers was philosopher Richard Dawid, who has come up with a theory (or formalization) of non-empirical confirmation in physics, notably in the book String Theory and the Scientific Method.

Dawid contends that physicists so far use primarily the following criteria to assess physical theories in the absence of empirical confirmation:

  • Elegance and beauty,
  • Gut feelings (or the gut feelings of famous people),
  • Mathematical fertility.

I think Dawid is unfortunately correct in this first analysis. The reasons why physicists momentarily trust theories before they can be empirically probed are largely subjective and sociological. This anecdote recalled by Alain Connes in an interview about 10 years ago is quite telling:

“How can it be that you attended the same talk in Chicago and you left before the end and now you really liked it. The guy was not a beginner and was in his forties, his answer was ‘Witten was seen reading your book in the library in Princeton’.”

Note that this does not mean that science is a mere social construct: this subjectivity only affects the transient regime when “anything goes”, before theories can be practically killed or vindicated by facts. Yet it means there is at least room for improvement in this transient theory-building phase.

Dawid puts forward three principles, which I will detail below, to more rigorously ground the assessment of physical theories in the absence of experimental data. Before going any further, I have to clarify what we may expect from non-empirical confirmation. There is a weak form: by non-empirical confirmation we simply mean a small improvement in the fuzzily defined Bayesian prior we have that a theory will turn out to be correct. This is the uncontroversial understanding of non-empirical confirmation, but one that Dawid deems too weak. There is also a strong form, where “confirmation” is understood in its non-technical sense, that of definitively validating the theory without even requiring experimental evidence. This one, which some high-energy theorists might sometimes foolishly defend, is manifestly too strong. Part of the controversy around non-empirical confirmation is that Dawid wants something stronger than the weak form (which would be trivial in his opinion) but weaker than the strong form (which would be simply wrong). However, because it is quite difficult to understand precisely where this sweet spot lies, Dawid has often been caricatured as defending an unacceptably strong form of his theory.
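
To make the weak form a bit more concrete (this Bayesian gloss is mine, not Dawid’s exact formulation), call F a non-empirical feature of a theory T, say “no alternative has been found despite a serious search”. Bayes’ rule gives

P(T|F) = \frac{P(F|T)}{P(F)} \, P(T)

and the prior is boosted, P(T|F) > P(T), exactly when the feature is more likely if the theory is correct than if it is not, i.e. when P(F|T) > P(F|\neg T). Non-empirical confirmation in the weak sense is only this incremental boost, and its size depends on likelihoods that are themselves fuzzy and largely subjective.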

What we may expect from non-empirical hints is an important question and I will come back to it later. Right now, I ask: can we find good guides to increase our chances of staying on the right track while experiments are still out of reach?

Dawid’s principles

  1. No Alternative Argument (aka “only game in town”):
    Physicists have tried hard to find alternatives and have not succeeded.
  2. Meta-Inductive Argument:
    Theories with the same characteristics (obeying the same heuristic principles) proved successful in the past.
  3. Unexpected Explanatory Interconnections:
    The theory was developed to solve one problem but surprisingly solves other problems it was not meant to.

These principles are manifestly crafted with String Theory in mind, which they seem to fit perfectly. String Theory is not the only game in town, but it is arguably more developed than the alternatives (including some alternatives I find interesting). String Theory also fares well on the Meta-Inductive Argument: it makes extensive use of the ideas and principles that made the success of previous theories, especially those of quantum field theory. In the course of the development of String Theory, a lot of unexpected interconnections also emerged. Many of them are internal to the theory: different formulations of String Theory actually seem to describe different limits of the same thing. But there are also unexpected byproducts: e.g. a theory constructed to deal with the strong nuclear force ends up containing gravitational physics.

At this stage, one may be tempted to nitpick and find good reasons why String Theory does not actually satisfy Dawid’s principles, possibly to defend one’s alternative theory. However, I am not sure this is a good line of defense, and I think it draws attention away from the interesting question: independently of String Theory, are Dawid’s principles a good way to get closer to the truth?

Naive meta check

We may do a first meta check of Dawid’s principles, i.e. ask the question:

Would these principles have worked in the past?
or
Would they have guided us to what we now know are viable theories?

We may carry out this meta check on the Standard Model of particle physics (an instance of Quantum Field Theory) and General Relativity.

At first sight, both theories fare pretty well. Quantum field theory quickly became the main tool to describe fundamental particles while being the simplest extension of the previously successful principles of quantum mechanics and special relativity. Further, quantum field theoretic techniques unexpectedly turned out to apply to a wide range of problems, including classical statistical mechanics. General Relativity also seemed like the only game in town, minimally extending the earlier principles of special relativity introduced by Einstein. The question of the origins of the universe, which General Relativity did not primarily aim to answer, was also unexpectedly brought from metaphysics into the realm of physics. I chose simple examples, but it seems that for these two theories there are plenty of details that fit the three guiding principles proposed by Dawid. The latter look almost tailored to get the maximum number of points in the meta check game.

Fooled by survival bias

As convincing as it may seem, the previous meta check is essentially useless. It shows that successful theories indeed fit Dawid’s principles. But we have looked only at the very small subset of successful theories. It thus does not tell us that following the principles would have led us to successful theories rather than unsuccessful ones. In the previous assessment, we were being dangerously fooled by survival bias. We looked at the path ultimately taken in the tree of possibilities, focusing on its characteristics, but forgetting that what matters is rather how it differs from the other possible paths.

To really meta check Dawid’s principles, it is important to study failures as well: the theories that looked promising but were then disproved and ultimately forgotten. For obvious reasons, such theories are no longer taught and are thus all too easy to overlook.

A brief History of failures

Let us start our brief History of promising-theories-that-failed with Nordström’s gravity. This theory slightly predates General Relativity and was proposed by Gunnar Nordström in 1913 (with crucial improvements by Einstein, Laue, and others; see Norton for its fascinating history). It is built upon the same fundamental principles as General Relativity and differs only subtly in its field equations. Mainly, General Relativity is a tensor theory of gravity, in that the Einstein tensor G_{\mu\nu}= R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} is proportional to the matter stress-energy tensor T_{\mu\nu}:

R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

Nordström’s theory is a simpler scalar theory of gravity. The scalar curvature R is sourced by the trace T:=T_{\mu}^\mu of the stress-energy tensor. This field equation is insufficient to fully fix the metric, so one adds the constraint that the Weyl tensor C_{abcd} is zero:

R = \frac{24  \pi G}{c^4} T
C_{abcd}=0

This makes Nordström’s theory arguably mathematically neater than Einstein’s. Further, while it brings all the modern features of metric theories of gravity, its predictions are in many cases quantitatively closer to those of Newton’s theory. Finally, for two years it was the only game in town, as Einstein’s tensor theory was not yet finished.

But Nordström’s theory predicts no deflection of light by gravitational fields and the wrong value (by a factor -\frac{1}{6}) for the advance of the perihelion of Mercury. Neither result could be used to decide between the two theories in 1913. If we had had to compare Nordström’s and Einstein’s theories with Dawid’s principles, I think we would have hastily given Nordström the win.

Another example of a promising theory that was ultimately falsified is the SU(5) Grand Unified Theory, proposed by Georgi and Glashow in 1974. The idea is to embed the gauge groups U(1)\times SU(2) \times SU(3) of the Standard Model into the simple gauge group SU(5). In this theory, the three (non-gravitational) forces are the low-energy manifestations of a single force. Going towards greater unification had been a successful way to proceed, from Maxwell’s fusion of electric and magnetic phenomena to the Glashow-Salam-Weinberg electroweak unification. Further, the introduction of a simple gauge group mimics earlier approaches successfully applied to quarks and the strong interaction. The theory of Georgi and Glashow seems to leverage the “unreasonable effectiveness of mathematics” (Wigner’s phrase) in its purest form.
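
For concreteness, the embedding works at the level of representations too: a single generation of Standard Model fermions fits exactly into two irreducible representations of SU(5),

U(1)\times SU(2) \times SU(3) \subset SU(5), \qquad \text{one generation} \sim \bar{5} \oplus 10,

and the extra gauge bosons of SU(5), those outside the Standard Model subgroup, are precisely the ones that mediate proton decay.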

The SU(5) Grand Unified Theory predicts that protons can decay, with a lifetime of ~10^{31} years. The Super-Kamiokande detector in Japan has looked for such events, without success: if protons actually decay, they do so at least a thousand times too rarely to be compatible with the SU(5) theory. Despite the early enthusiasm and its high score in the non-empirical confirmation game, this theory is now falsified.

Physics is full of such examples of theoretically appealing yet empirically inadequate ideas. We may also mention Kaluza-Klein-type theories unifying gauge theories and gravity, S-matrix approaches to the understanding of fundamental interactions, and Einstein’s and Schrödinger’s attempts at unified theories. We can probably add many supersymmetric extensions of the Standard Model to this list, given the recent LHC null results. In many cases, we have theories that fit Dawid’s principles even better than our currently accepted theories, but that nonetheless fail experimental tests. The Standard Model and General Relativity do pretty well in the non-empirical confirmation game, but they would have been beaten by many alternative proposals. Only experiments allowed us to choose the right yet not-so-beautiful path.

Conclusion

Looking at failed theories makes Dawid’s principles seem less powerful than a test restricted to the surviving subset would suggest. But I do not have a proposal to improve on them. It may very well be that they are the best one can get: perhaps we just cannot expect too much from non-empirical principles. In the end, I am not sure we can defend more than the weakest meaning of non-empirical confirmation: a slight improvement of an already fuzzily defined Bayesian prior.

Looking at modern physics, we see an extremely biased sample of theories: they are the old fridge that is still miraculously working. Their success may very well be more contingent than we think.

I think this calls for more modesty and open-mindedness from theorists. In light of the historically mixed record of non-empirical confirmation principles, we should be careful not to put too much trust in neat but untested constructions and remain open to alternatives.

Theorists often behave like deluded zealots, putting an absurdly high level of trust in their models and the principles on which they are built. While this may be efficient for obtaining funding, it is suboptimal for understanding Nature. Theoretical physicists, too, can be fooled by survival bias.

This post is a write-up of a talk I gave at an informal seminar at MPQ a few months ago. As my main reference, I used Dawid’s article The Significance of Non-Empirical Confirmation in Fundamental Physics (arXiv:1702.01133), which is his contribution to the “Why trust a theory” conference.

Self-promotion

Anil Ananthaswamy has written a very nice piece for New Scientist about semiclassical gravity. It deals with recent attempts (in which I have taken part) to make sense of a theory in which gravity is fundamentally classical.


The article is a bit too kind and likely gives me a more central place in this adventure than I actually deserve. Nonetheless, it contains the right amount of qualifiers and lets the skeptics speak. I understand them all too well: our approach is clearly not without flaws. It is more a counterexample to pessimistic views about semiclassical gravity than a believable proposal for a theory of everything. And I would not be surprised if it were falsified in the near future. But as Lajos is quoted saying at the end: “we must explore”.

Around the end of the article, Carlo Rovelli says he gives gravity a 99% chance of being quantum. There, I think he is being a bit overconfident about the path he and his collaborators are pursuing, although his skepticism about our own work is again warranted. Are the reasons why we think gravity should be quantum really so strong? I am not sure; after all, we know very little about gravity (see this recent essay). If gravity is not semiclassical in the way we have proposed, it could be in many other ways. Fortunately, this question is answerable and will not require a particle accelerator the size of the Milky Way. If gravity is not quantum, this proposed experiment (which I had advertised here) will see it. Meanwhile, we have to remain open.

Spacetime essay

I have finally decided to put on arXiv a slightly remastered version (with figures) of my submission to the Beyond Spacetime essay contest. I have tamed the provocative tone a little, but it remains a bit rough. I still post it because I think it puts together many arguments I often make informally in seminars.

The winning essays took a slightly less head-on approach to the subject of the contest, which was “Space and Time After Quantum Gravity”:

  • Why Black Hole Information Loss is Paradoxical, by David Wallace, arXiv
  • Problems with the cosmological constant problem, by Adam Koberinski, PhilSci

Wallace’s article is an answer to Tim Maudlin’s article I had mentioned here, but I have not read it yet.

Podcast on quantum mechanics

I have been interviewed by Vincent Debierre (in French) for Liberté Académique. 

In this podcast, we mainly talk about quantum mechanics and a little about its popularization. While I am perhaps a bit provocative in my criticism of the showmanship of some physicists, I believe I did not say too much nonsense. I got one date wrong: von Neumann’s axiomatization of quantum mechanics dates from 1932, not 1926 (which corresponds to the “first wave”, with Heisenberg, Born, and Jordan, followed by Dirac’s formulation in 1930).

Thanks to Thomas Leblé, I found a draft I had written three years ago about what I understood of the difficulties of quantum mechanics, which echoes what I say in this podcast. Rereading it, I am surprised to find myself largely in agreement with what I wrote at the time. Naturally, there are quite a few somewhat unfortunate formulations (in particular, I think one can drop “realist” from “local realist”), as well as quite a few spelling mistakes. That said, I prefer to leave the text as it is for the moment, with its problems, rather than spend an indeterminate amount of time reworking it without anyone ever being able to read it. In short, you can have a look if you are interested, keeping in mind that it is an old draft.

Status of the article pipeline

Ideas have a long way to go before they become articles. Usually, for me, the first step is to write down the computations and main arguments on paper. Then I write a better-typeset draft and iterate on it until it looks like a decent preprint. I then put it on arXiv, hoping to gather some feedback. Then I further iterate on the draft, rewrite things, correct mistakes, and send it to a first journal. The first journal typically rejects it, but hopefully I get constructive reports. So I go to a second journal or a third, and an article usually quite different from the original idea gets published. In the process, my ego gets shattered, but the article becomes, I think, substantially better. This process is very long, and so many articles are at different stages in the “pipeline” at the same time. Over the past two weeks, I have made a bit of progress emptying the pipeline.

1) Exact signal correlators in continuous quantum measurement

This is a new preprint I am quite happy about. In continuous quantum measurement, the objective is usually to reconstruct a continuous quantum trajectory \rho_t from a noisy continuous measurement readout I_t. People often make the confusing remark that the quantum trajectory \rho_t is not directly “measurable” but only reconstructed from a model. This is misleading. One can do projective measurements every time the state reaches a given value \rho_t = \sigma and then check that the statistics obtained match the theoretical predictions of continuous measurement theory. Nonetheless, there is a valid point: this is inconvenient, and to validate the model or measure its parameters it would be more convenient to deal only with the statistics of the continuous measurement readout. So instead of using the theory to reconstruct the state \rho_t, we can use the theory to compute the statistics of the signal I_t. This would allow one to read off the free parameters of the model from “directly” obtained quantities. This point is not mine; it has been made recently, notably in a preprint by Atalaya et al. In that article, the authors compute the n-point correlation functions of the signal for qubits with a method that is (or at least seems) ad hoc. Reading their preprint, I remembered that I knew how to compute the n-point functions in full generality. I had just never understood that it could be useful. My only fear is that the result is known but buried in the Russian literature of the nineties. In that case, it would be the end of the journey for this preprint in the pipeline.
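
For readers unfamiliar with the formalism, here is a minimal numerical sketch of what “statistics of the signal” means. This is not the method of the preprint, just a brute-force illustration with arbitrary parameters: a qubit monitored along \sigma_z, evolved with an Euler-Maruyama discretization of the diffusive stochastic master equation in one common convention (the readout being dy_t = 2\sqrt{\gamma}\langle\sigma_z\rangle_t dt + dW_t), from which a two-point correlator of the signal I_t = dy_t/dt is estimated by averaging over trajectories.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def signal_trajectory(omega=1.0, gamma=0.5, dt=1e-3, n_steps=3000, rng=None):
    """One diffusive trajectory of a qubit with H = omega*sx/2, continuously
    monitored along sz with strength gamma; returns the discretized readout."""
    rng = rng if rng is not None else np.random.default_rng()
    H = 0.5 * omega * sx
    rho = np.array([[1., 0.], [0., 0.]], dtype=complex)  # start in |0>
    readout = np.empty(n_steps)
    for k in range(n_steps):
        z = np.real(np.trace(sz @ rho))                  # <sigma_z>
        dW = np.sqrt(dt) * rng.standard_normal()
        readout[k] = 2.0 * np.sqrt(gamma) * z + dW / dt  # I_t = dy_t/dt
        # Euler-Maruyama step of the stochastic master equation
        drho = ((-1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)) * dt
                + np.sqrt(gamma) * (sz @ rho + rho @ sz - 2.0 * z * rho) * dW)
        rho = rho + drho
        rho = 0.5 * (rho + rho.conj().T)                 # keep rho Hermitian
        rho /= np.real(np.trace(rho))                    # and normalized
    return readout

# Brute-force estimate of the two-point signal correlator <I(t0) I(t0+tau)>.
rng = np.random.default_rng(seed=1)
dt, k0, ktau = 1e-3, 1000, 1500                          # t0 = 1.0, tau = 1.5
trajectories = [signal_trajectory(dt=dt, rng=rng) for _ in range(300)]
corr = np.mean([I[k0] * I[k0 + ktau] for I in trajectories])
print(f"estimated <I(t0) I(t0+tau)> ~ {corr:.3f}")
```

The point of the preprint is precisely that such correlators can be computed exactly, without resorting to this kind of sampling.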

2) Binding quantum matter and spacetime, without romanticism

This is an essay I wrote for the “Beyond Spacetime” essay contest. In the absence of empirical evidence, I defend semiclassical gravity as a sober and metaphysically sounder alternative to quantum gravity. While most people advocating semiclassical gravity merely criticize the cheap rebuttals made against it, I have tried to also be constructive, pushing the explicit examples Lajos Diosi and I have introduced. The essay is deliberately provocative and certainly not optimally constructed. Perhaps I have tried a bit too hard to stretch my arguments to fit the subject (and actually take its counterpoint). Anyway, while it was apparently shortlisted, it did not win the prize. So I am left with a rather specific essay I do not know what to do with.

I am not sure what will happen to it. The organizers of the contest kindly gave me the referee reports, along with a suggestion to submit the essay (after corrections) to a philosophy of science journal. But I think I would need to substantially modify it to turn it into a proper article, and I am a bit overwhelmed by the task. On the other hand, although imperfect, I think this essay makes a few points that are insufficiently known. I am also tempted by the sunk cost fallacy: I have already spent quite some time on this and would hate for it to be totally wasted. Before I make up my mind, you are welcome to read the present version of the essay.

3) Ghirardi-Rimini-Weber model with massive flashes

This is an article I have already talked about here. It started as a simple toy model to explain the basic idea behind new approaches to semiclassical gravity. Stimulated by the prospect that such non-bullshitty foundational work might be acceptable in Physical Review Letters, I spent a bit of time polishing the sentences and shortening the article. The objective was to make it as easy to read as possible (thanks to Dustin Lazarovici, who helped me a great deal). This is a decently good letter, I think, in that it is really self-contained and readable by a general audience interested in foundations (and not the grandiosely oversold summary of a 20-page Supplementary Material that letters tend to be). Well, PRL eventually refused it, but it was accepted as is in Phys. Rev. D as a Rapid Communication, which is probably the next best thing I could hope for. It will soon be published, but you can already check the latest version here.