[Dr. Chris Oakley's home page] [More comments about academic research]

The search for a quantum field theory

"[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr's orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number."

— Paul Dirac, Nobel laureate 1933

"The shell game that we play ... is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate."

— Richard Feynman, Nobel laureate 1965

Quantum Field Theory purports to be the most fundamental of sciences in that it concerns the ultimate constituents of matter. The term "quantum field theory" is used interchangeably with "particle physics" and "high energy physics" on the grounds that the experimental support for this theory comes from expensive experiments involving high-energy beams of particles. Although such multi-billion-dollar experiments are needed to push the boundaries, the theories of course claim to be universal, and should apply equally to the familiar and everyday world.

Current practitioners in the field will no doubt bemoan the fact that taxpayers of the world are increasingly less willing to find the money to pay for this esoteric study. Do they really care whether there are Higgs particles or heavier flavours of quarks? Probably not. Why not? Because it really makes no difference to their own lives. Their message is clear: study theology, philosophy or "useless" branches of science if you will, but if the cost is more than a few academic salaries then don't expect us to pay.

It was not always thus. In the late seventies and early eighties, many of the general public followed the unfolding drama of the fundamental particle world with keen interest, and did not seem to mind about the cost of the experiments. It was, after all, a prestige project, like the Apollo program.

A lot of it had to do with the names: "quarks" with their different "flavours" and "colours", with qualities like "strangeness" and "charm". The "gluons" that bound them together. The enormous and mind-blowing scale of the experiments at CERN or Fermilab or SLAC. The enthusiastic (if largely uncomprehended) explanations by academics in the field, with their diagrams and manic gesticulations. At one time, it seemed that every other popular science program on TV was about particle physics. Indeed, I remember a marathon one on BBC2 around 1976, whose climactic moment was the completion of the pattern of SU(3) hyperons by the Omega Minus - a particle predicted by theory and subsequently discovered by experiment. What could be more satisfying? I remember in particular interviews with Abdus Salam and Richard Feynman. Salam did his plug for Indian culture by talking about the domes of the Taj Mahal and how their symmetry made them beautiful. His point was that this principle of symmetry could be applied to physics as well. The other thing I remember was Feynman saying that he was not entirely comfortable with "gauge" theories, but then he was an old timer, and what did he know? (Looking back on it now, that was rich, coming from him, since he won a Nobel prize for the original gauge theory - quantum electrodynamics).

It was hard not to get swept up in this. Oxford's contribution at the time (1980) was a series of public lectures at Wolfson College, the most memorable of which was given by Murray Gell-Mann, one of the leading lights in the field.

Both Tim Spiller, my tutorial partner, and I wanted to do research in the field, and both of us succeeded. He did a Ph.D. at Durham University and I a D.Phil. at Oxford, following a one-year course at Cambridge to study the relevant mathematics. Tim and I were the bane of our tutors as undergraduates because of the way we would never accept "hand-waving" (unrigorous) explanations. I like to think that the good side of this fussiness was that the theses we eventually produced (in totally different branches of field theory) were of higher quality than average.

Just what gauge theories and renormalization are I did not discover until I went to Cambridge. Renormalization failed the hand-waving test dismally.

This is how it works. In the way that quantum field theory is done - even to this day - you get infinite answers for most physical quantities. Are we really saying that particles can be produced with infinite probabilities in scattering experiments? (Probabilities - by definition - cannot be greater than one). Apparently not. We just apply some mathematical butchery to the integrals until we get the answers we want. As long as this butchery is systematic and consistent, whatever that means, then we can calculate regardless, and what do you know, we get fantastic agreement between theory and experiment for important measurable quantities (the anomalous magnetic moment of leptons and the Lamb shift in the Hydrogen atom), as well as all the simpler scattering amplitudes.

"You may have eleven significant figures of agreement, but you cheated to get it, and so it does not count," I say.

"What does it matter," they say. "This can't be a coincidence. What we have here has got to be the best theory ever."

"It's not a theory," I say. "It's just rubbish. I did not spend years learning the rules of mathematics just to abandon them the first time they turn out to be inconvenient."

As long as I have known about it I have argued the case against renormalization. On the other hand I did want to get my degree, so I just chose research that avoided confronting the issue. I tried to get to grips with some of the things that one has to deal with before getting to renormalization: issues related to non-interacting fields, such as the spin-statistics theorem and Lagrangians for particles of higher spin. There were a few interesting things to explore, mostly in clarifying the connection between work in the area and the underlying principles, which is what I did for my doctoral thesis. This was done by March, 1984, which left me with a few months in hand, so I started looking at renormalization again to see if I could make any more sense of it second time round.

I then discovered something very nice. If the field is written as a power series in the coupling constant, then the field equations enable a simple reduction of an interacting field in terms of the free field and any amplitude can be calculated just by inspection. I wrote this up in a preprint here. The idea was so simple that I found it hard to believe that I was the first to see it. Well, there is nothing new under the sun, and sure enough - as I discovered in late 2005/early 2006 - what I had was an old idea that had just withered on the vine. Stueckelberg1 thought of it first, in 1934, but Källén2 (who seems not to have been aware of Stueckelberg's work), also thought of it in 1949.
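In sketch form - using generic modern notation, and a scalar field with a cubic self-interaction chosen purely for illustration rather than the theories actually treated in my papers - the reduction works like this:

```latex
% Illustrative Heisenberg field equation for a self-interacting scalar:
%   (\Box + m^2)\,\varphi = -g\,\varphi^2
% Write the interacting field as a power series in the coupling g,
\varphi(x) = \varphi^{(0)}(x) + g\,\varphi^{(1)}(x) + g^2\,\varphi^{(2)}(x) + \cdots
% and equate powers of g. The result is a recursion in which each order
% is determined entirely by the orders before it:
(\Box + m^2)\,\varphi^{(0)} = 0, \qquad
(\Box + m^2)\,\varphi^{(1)} = -\bigl(\varphi^{(0)}\bigr)^{2}, \qquad
(\Box + m^2)\,\varphi^{(2)} = -2\,\varphi^{(0)}\varphi^{(1)}, \;\ldots
```

Since every order is thereby an explicit functional of the free field, whose creation and annihilation content is known exactly, matrix elements between free-particle states can be read off order by order - the "calculation by inspection" referred to above.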

The idea has two consequences that ought to have given the founders of quantum mechanics a lot of grief. First, the local field equations they would expect to be able to use give nonsensical, infinite answers (a feature, incidentally, of every other treatment of quantum field theory). Secondly, properties such as orthonormality of a basis of particle states at constant time no longer apply. For the former, one way round it is not to use rigidly local field equations. The only reason for choosing one field equation over another is to get agreement with experiment (although normally in such a way as to incorporate experimentally-founded beliefs in invariance principles such as special relativity). If local field equations give infinite answers then obviously they are not agreeing with experiment. However, it is possible to make an adjustment - known as "normal-ordering" - which eliminates the problem, at least in the Källén-Stueckelberg approach. The latter problem is Haag's theorem: in the presence of interactions, it is always assumed that the Hamiltonian can be split into a "free" part and an "interaction". The "free" part is used to define an orthonormal basis of states to which the interaction applies. But Haag's theorem says that this is not possible, or to put it another way, it is not possible to construct a Hamiltonian operator that treats an interacting field like a free one. Haag's theorem forbids us from applying the perturbation theory we learned in quantum mechanics to quantum field theory, a circumstance that very few are prepared to consider. Even now, the text-books on quantum field theory gleefully violate Haag's theorem on the grounds that they dare not contemplate the consequences of accepting it.
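For readers unfamiliar with normal-ordering, a single-mode example - standard textbook material, not anything specific to the Källén-Stueckelberg approach - shows where the adjustment bites:

```latex
% One oscillator mode, with [a, a^\dagger] = 1. Since
a\,a^\dagger = a^\dagger a + 1 ,
% normal-ordering (written :\,\cdots\,:), which moves every annihilation
% operator to the right as if the operators commuted, discards the
% commutator term:
: a\,a^\dagger : \;=\; a^\dagger a .
% Applied to the free-field Hamiltonian, whose mode expansion contains
% an infinite zero-point term proportional to \delta^3(0),
H = \int\! d^3k\;\omega_k \Bigl( a^\dagger_{\mathbf{k}} a_{\mathbf{k}}
      + \tfrac{1}{2}\,\delta^3(0) \Bigr)
\;\longrightarrow\;
: H : \;=\; \int\! d^3k\;\omega_k\, a^\dagger_{\mathbf{k}} a_{\mathbf{k}} ,
% so that the vacuum has zero energy rather than infinite energy.
```

Putting the interaction terms of a field equation into normal order is the same kind of adjustment: the operator content is unchanged except that the divergent vacuum contributions never arise.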

However, in my view, acceptance of Haag's theorem is a very good place to start. The next paper I wrote, in 1986, follows this up. It takes my 1984 paper and adds two things: first, a direct solving of the equal-time commutators, and second, a physical interpretation wherein the interaction picture is rediscovered as an approximation.

With regard to the first thing, I doubt if this has been done before in the way I have done it3, but the conclusion is something that some may claim is obvious: namely that local field equations are a necessary consequence of fields commuting for spacelike intervals. Some call this causality, arguing that if fields did not behave in this way, then the order in which things happen would depend on one's (relativistic) frame of reference. It is certainly not too difficult to see the converse: namely that if we start with local field equations, then the equal-time commutators are not inconsistent, whereas non-local field equations could well be. This seems fine, and the spin-statistics theorem is a useful consequence of the principle. But in fact this was not the answer I really wanted, as local field equations seemed to lead to infinite amplitudes. It could be that local field equations with the terms put into normal order - which avoid these infinities - also solve the commutators, but if they do then there is probably a better argument to be found than the one I give in this paper. Substituting Haag expansions (arbitrary sums of normal-ordered tensor products of free fields) directly into the commutators is an obvious thing to try here. I did make fumbling attempts around October 2001, without making much progress, but I think that with a little more ingenuity, there could be a solution out there.
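Stated explicitly - in standard notation and the +--- metric convention, which are my choices for this sketch rather than the paper's - the locality condition in question is:

```latex
% Microcausality: fields commute at spacelike separation,
[\varphi(x), \varphi(y)] = 0 \quad \text{whenever} \quad (x-y)^2 < 0 .
% For the free scalar field the commutator is a c-number,
[\varphi^{(0)}(x), \varphi^{(0)}(y)] = i\,\Delta(x-y) ,
% and the Pauli-Jordan function \Delta vanishes outside the light-cone.
% The spin-statistics theorem enters because for half-integer spin it is
% the anticommutator, not the commutator, that vanishes at spacelike
% separation; making the opposite choice in either case leads to
% inconsistency.
```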

With regard to the second thing, the matrix elements consist of transients plus contributions which survive for large time displacements. The latter turns out to be exactly that which would be obtained by Feynman graph analysis. I now know that - to some extent - I was just revisiting ground already explored by Källén and Stueckelberg4.

My third paper applies all of this to the specific case of quantum electrodynamics, replicating all scattering amplitudes up to tree level. As for reproducing the "successes" of traditional QED, namely the Lamb shift and the anomalous magnetic moment of leptons, I do not know. I would want to be confident that I had an understanding of bound states before I attempted a Lamb shift calculation and would want to be sure I understood the classical limit of the photon field before I tried to calculate the anomalous magnetic moment of leptons. Finding time to do it is the problem.

Here is the correspondence I had with the journals. It seems that my greatest adversaries were the so-called "axiomatic field theorists", who, not content just to disagree, appeared to be determined to ensure that nothing I wrote ever got into print. It is interesting, by the way, that such a group should exist at all. After all, one does not have "axiomatic" atomic physicists, "axiomatic" chemists or "axiomatic" motorists. Since "axiomatic" simply means obeying the rules, the usage here ought to be a pleonasm. Either one is a field theorist, in which case - unless one can formulate better ones - one is bound by the rules, or one is not a field theorist at all. If you were stopped for speeding and told the policeman that you were not an axiomatic motorist and therefore not subject to traffic regulations, it might not help your case. My interaction with the axiomatic field theorists, though, only reminded me of why they are an irrelevance, and nowadays it seems that the best they can come up with is a scheme where extra degrees of freedom are introduced midstream and a contrived "limit" of the newly-introduced parameters taken (the Epstein-Glaser method). Outside their group, this would be called renormalization, and if the self-appointed guardians of mathematical propriety are sanctioning renormalization, to whom is one supposed to turn?

My proposal, in a nutshell, is this: write the interacting fields as sums of tensor products of free fields. Use coefficients in the expansion that are almost those which follow from the usual local equations of motion. I say "almost" because the terms must appear in normal order. Then use the known properties of free fields to evaluate the matrix elements directly. Comparison of these expressions with time-dependent perturbation theory (from ordinary quantum mechanics) shows that these consist of transients plus the tree-level Feynman graph amplitudes. As no re-definition of the mass, coupling or field operators is required (infinite or otherwise), there is no renormalization.

Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The six prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. These are as follows:

By these awards, the Swedish Academy is knowingly endorsing shoddy science, reflecting the fact that shoddy science has become mainstream. Particle physicists now accept renormalization more or less without demur. They think that they have solved the problem, but the reality is just that they have given up trying. Some even seem to be proud of their efforts, lauding the virtues of makeshift "effective" field theories that can be inserted into the infinitely-wide gap defined by infinity minus infinity. Despite this, almost all concede that things could be better; it is just that they consider that trying to improve the situation is ridiculously high-minded and idealistic. None that I have talked to expect to see a solution in their lifetimes. They think it possible that the universe might have 10, 11 or 26 dimensions (according to Edward Witten's mood that day), but they absolutely do not believe that calculations (in four dimensions) that they studied when they were graduate students can be done without mathematical sleight of hand. Neither do any appear to be interested in investigating the possibility. As with a lot of things, Feynman had a nice way of putting it: "Renormalization is like having a toilet in your house. Everyone knows it's there, but you don't talk about it." But personally, I do not see how fundamental physics can move on until the problem is solved. Before it can be solved, it needs to be addressed, and before it can be addressed, it needs to be acknowledged. I struggle even to get the problem acknowledged. We now have a situation where if you asked a theoretical particle physicist, "Would you like to be able to calculate the Lamb Shift without any renormalization?" you would get the answer, "Of course!" But if you then asked, "Are you prepared to sponsor a project that attempts this?" the answer would always be "No". Ultimately, in taking this attitude, they only harm themselves.
Botching may be an unavoidable part of many practical endeavours, where deadlines have to be met and customers have to be satisfied, but on the research frontier there can be no justification.

For those of you who are swayed by arguments from authority (I like to think that I am not one of them, by the way), one could almost make the case against renormalization on these grounds. Backing the view of one Nobel prizewinner (Dirac) against the twelve listed above could be justified by saying that excluding Feynman - who in any case had plenty of doubts about renormalization - Dirac made more impact on physics than the others put together6.

Physicists are first and foremost scientists. They are not primarily mathematicians and they are not religious zealots (at least not in regard to work). The extent to which they are permitted to believe their explanations is the extent to which they are verified in experiments. They therefore are entitled to strong faith in quantum mechanics and special relativity, both of which seem to pass all of the multifarious experimental tests thrown at them. They are also entitled to believe in vector particles mediating the weak interaction. They are entitled to believe in quarks. The following however are less certain: general relativity as the theory of gravity and quantum chromodynamics as the theory of the "strong" nuclear force.

To take the first, the "proofs" of General Relativity are light bending, the precession of the perihelion of Mercury and gravitational red-shift. All of these are tiny effects, and whilst the results do not contradict G.R., they do not mean that it is the only possible explanation either. General Relativity is like quantum mechanics in that it is not so much a theory as a whole way of thinking, and it can be very hard to fit something as grandiose as this with other frameworks, quantum mechanics in particular. If there is a conflict between quantum mechanics and G.R. then the scientist (if not the mathematician) is forced to choose quantum mechanics. With gravity, the experimental data - or at least data that cannot be explained by Newtonian gravity - are incredibly sparse compared to the results that support quantum mechanics. What we would like are experiments that test gravity at the microscopic level, in the same way that Quantum Optics tests electromagnetism at the quantum level, but will we ever get these? The inability to get an experimental handle on quantum gravity makes me wonder whether it even exists at all in its own right. Might gravity be just some kind of residual of other forces, like the van der Waals attraction in chemistry? Assuming that this notion is wrong, what about strong gravitational fields? The fact is that we know nothing about the world of strong gravitational fields, a fact which has not stopped astrophysicists giving names to objects that are supposed to have them, such as neutron stars and black holes. Unfortunately, an observatory is not a laboratory. It is very hard to understand or even demonstrate the existence of such objects unless you have a degree of control over them.

The other area of uncertainty is, to my mind, the "strong" nuclear force. The quark model works well as a classification tool. It also explains deep inelastic lepton-hadron scattering. The notion of quark "colour" further provides a possible explanation, inter alia, of the tendency for quarks to bunch together in groups of three, or in quark-antiquark pairs. It is clear that the force has to be strong to overcome electrostatic effects. Beyond that, it is less of an exact science. Quantum chromodynamics, the gauge theory of quark colour, is the candidate theory of the binding force, but we are limited by the fact that bound states cannot be done satisfactorily with quantum field theory. The analogy of calculating atomic energy levels with quantum electrodynamics would be to calculate hadron masses with quantum chromodynamics, but the only technique available for doing this - lattice gauge theory - despite decades of work by many talented people and truly phenomenal amounts of computer power being thrown at the problem, seems not to be there yet, and even if it were, many, including myself, would be asking whether we have gained much insight through cracking this particular nut with such a heavy hammer.

A talk (slides in PDF) on my work given at the University of Dortmund (Germany) on 12 May 2011. I got into e-mail correspondence with the Professor, Heinrich Päs, who is writing a popular science book that mentions my great-uncle Willem. In the course of our correspondence I managed to invite myself to give a talk on my quantum field theory work - the first in over 20 years. The audience was a young crowd of mostly phenomenologists. I found, to my slight surprise, that I was pushing at an open door here as the audience seemed to agree with my criticisms of renormalization. Whether anything will come of it, I know not, but if I give the talk again then there will be more slides as I had finished the main talk in about half an hour.

The only academic I am aware of to have studied my arguments in the last 20 years was former Harvard assistant professor Lubos Motl. He decided, after what I guess was about a minute's examination, that what I was doing was Feynman-Dyson perturbation theory in disguise. Quantum field theory is, admittedly, a difficult subject, but more people ought to know at least the bare essentials. I therefore give them here. Knowledge of QFT to first-year graduate level is assumed.

On 26 February 2012 I was interviewed by Philip Mereton on WebTalk radio. The link is here. Saying that I was "driven out of the physics community for voicing objections to the fudging problem" is maybe dramatising a little, but it is certainly true to say that I did not leave physics voluntarily.


Footnotes (click on the footnote number to return to the relevant point in the text):

1 Relativistisch invariante Störungstheorie des Diracschen Elektrons, by E.C.G. Stueckelberg, Annalen der Physik, vol. 21 (1934). My German not being everything it should be, I have relied on a pre-digested version of this paper given here: The Road to Stueckelberg's Covariant Perturbation Theory as Illustrated by Successive Treatments of Compton Scattering, by J. Lacki, H. Ruegg and V. L. Telegdi (1999). My thanks to Danny Ross Lunsford for drawing attention to the latter.

2 Mass- and charge-renormalizations in quantum electrodynamics without use of the interaction representation, Arkiv för Fysik, bd. 2, #19, p.187 (1950) and Formal integration of the equations of quantum theory in the Heisenberg representation, Arkiv för Fysik, bd. 2, #37, p.37 (1950). Both by Gunnar Källén. These are actually in the 1951 volume in the library I used (the Bodleian in Oxford). The work finds its way into his text book Quantum Electrodynamics (pub. by George Allen and Unwin (1972)), pp.79-85, although instead of following the idea through as I did, he immediately gives up, using Feynman-Dyson perturbation theory in the remainder of the book.

3 Actually, this paper: On quantum field theories, Matematisk-fysiske Meddelelser, 29, #12 (1955), by Rudolf Haag, solves spacelike commutators in §5, but for the restricted case of Φ³ theory, and just to first order in the power series expansion. Unlike my analysis, Haag places no restrictions on the first time derivative of the field, and finds a slightly more general solution, where some derivative couplings are allowed.

4 A few comments, though. Stueckelberg uses the power series expansion in the coupling, and the residue of the energy conservation pole to provide the physical interpretation. Something very similar is used in my papers. However, Stueckelberg is not properly second quantised: his photons are modes in a cavity, and his electrons are wave functions rather than field operators. Fermions cannot be created or destroyed and so the only process he can treat is Compton scattering (the scattering of a single photon off a single electron). Interestingly, his subsequent papers seem to indicate that he soon abandoned the covariant approach, instead switching to, and often anticipating, the non-explicitly-covariant S-matrix methods of Dyson, Feynman and Schwinger. Could he have given up because of problems with the Interaction Picture? We may never know. Given that his 1934 methods are so much simpler, more elegant and more powerful, I am still amazed that he would willingly stop working on them. Källén's papers, sixteen years later, are properly second quantised, but his physical interpretation is less elegant as his best ambition is just to reproduce Dyson's results, which, strictly, apply only to scattering processes. Källén's papers would be easier to read if he made more use of four-dimensional momentum space. He works out graphical representations which he claims are just Feynman diagrams, but they are not. They are more like the diagrams in my papers, which have two kinds of line for each particle type. The two types of line are confusingly drawn the same in his paper, even though he then goes on to calculate the amplitudes correctly by treating them differently.

5 Journalists please, please, please stop calling the Higgs the "God" particle! Firstly, it does not give mass to all particles in the universe. It only "gives mass" (if you want to call it that) to the massive vector bosons that mediate the weak interaction - a small subset. Secondly, if something has no rest mass it does not mean that it does not exist! Take the photon. Not having benefited from the "God" particle, it has zero rest mass. Are you saying therefore that it does not exist? So you do not believe in electricity, magnetism or light?

6 From The Strangest Man, a biography of Dirac by Graham Farmelo, we have an indication of Dirac's strength of feeling about renormalization (p. 409):

[In 1983, Pierre Ramond] invited Dirac to give a talk on his ideas at Gainesville any time he liked, adding that he would be glad to drive him there and back. Dirac responded instantly: "No! I have nothing to talk about. My life has been a failure!"
Ramond would have been less stunned if Dirac had smashed him over the head with a baseball bat. Dirac explained himself without emotion: quantum mechanics, once so promising to him, had ended up unable even to give a proper account of something as simple as an electron interacting with a photon - the calculations ended up with meaningless results, full of infinities. Apparently on autopilot, he continued with the same polemic against renormalisation he had been delivering for some forty years. Ramond was too shocked to listen with any concentration. He waited until Dirac had finished and gone quiet before pointing out that there already existed crude versions of theories that appeared to be free of infinities. But Dirac was not interested: disillusion had crushed his pride and spirit.