Standard and non-standard existential predictions

Existential predictions are very rare in science. For this reason, when scientists predict the existence of a new entity—a new planet or a new particle, for example—we feel that something very exciting is going on! But what is going on is not only exciting; it is also very interesting from a philosophical point of view. These predictions raise various philosophically interesting puzzles concerning the epistemological, metaphysical, and methodological roles that mathematics can play in scientific representation. In this post, I want to present two main kinds of existential prediction in science and to sketch some philosophical problems related to them. I will not address these problems here, however; the interested reader may refer to Ginammi (2016) for a more detailed analysis and for a solution to them.[1]

Typically, these predictions are made possible by an accurate mathematical representation of the physical system (or phenomenon) at issue. This raises questions like: How can we predict the existence of a new, concrete entity on the basis of mathematical, abstract considerations? How can mathematics, which can be defined as the study of possible structures, say something about elements in real structures? How can mathematics, which is developed by mathematicians for completely different purposes, turn out to be so effective in representing physical reality and fostering new discoveries?

Standard predictions and the DN model

A classic example of existential prediction is the prediction of the planet Neptune. During the first half of the nineteenth century, astronomers noticed several anomalies in the astronomical tables for the planet Uranus (Neptune's neighbour). Put briefly, Uranus was not exactly where astronomers expected to find it. The deviation from the expected position was minimal, but enough to raise concerns. There were two possible explanations for this deviation: either the theory from which the predictions were derived was wrong, or the theory was fine but the initial conditions were wrong. Now, the "theory" in this case is Newton's theory of gravitation, and nobody at the time could seriously suppose that Newton's theory of gravitation was wrong! Astronomers therefore opted for the second possibility: they had missed something in the initial conditions of the Solar system. What could they have missed? An analysis of the Uranus tables suggested that an unknown body was probably altering Uranus' orbit. They started working on this hypothesis and eventually made a prediction: that there is an eighth planet in the Solar system—Neptune, indeed. In order to account for the anomalies, this planet had to have such-and-such a mass, such-and-such a position, such-and-such a speed, and so on. Finally, in 1846, the new planet was actually observed, and the prediction was thus confirmed.

Now, what role did mathematics play in this prediction? An important role, of course, but not a decisive one. Mathematics seems to have been used in this case only to predict the characteristics of the new planet: based on accurate calculations, astronomers established that, in order to explain away the anomalies, the planet had to have the predicted characteristics. However, mathematics did not really play any role in suggesting the prediction; the prediction was suggested by the anomalies. In other words, mathematics did not suggest the existence of anything; it only made it possible to specify the conditions under which the new entity could explain away the anomalies.

This kind of existential prediction can easily be accounted for by the so-called deductive-nomological model (DN hereafter) proposed by Hempel (1965).[2] According to this model, to explain a scientific fact is to derive it from a set of laws of nature plus some initial conditions. Analogously, we can also predict a scientific fact by applying the laws to the proper circumstances (described by the initial conditions). However, when the predictions are not confirmed by experience and we end up with anomalies (i.e., when a physical system previously thought to obey a certain set of laws appears to behave as if the laws no longer apply to it), scientists will try to explain the reasons for this anomalous behaviour. In other words, they will try to "explain away" the anomalies. According to the DN model, this amounts to showing how the anomalous behaviour (assuming that the measurements revealing the anomalies are accurate enough) can be derived from a set of laws and initial conditions. Scientists have two options: either they change the laws, or they change the initial conditions. The first option is very costly, so they usually prefer—at least as a first step—the second one. This option involves a revision of the initial conditions, and this revision may consist in supplementing them, thus predicting a new entity. The criteria guiding this supplementation are defined by the DN model itself: the new initial conditions must be such that, by means of them, we are able to derive (and hence to explain) from our set of laws the behaviour we previously considered anomalous.
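The DN-style repair can be sketched in a toy computation. Everything below is invented for illustration (a one-dimensional "solar system" with made-up masses and positions): the point is only that the law is kept fixed while the initial conditions are supplemented until the observed behaviour becomes derivable again.

```python
# Toy illustration of the DN schema: a law (Newton-style inverse-square
# attraction) plus initial conditions (a list of masses and positions)
# entail a prediction. All numbers are invented.

G = 1.0  # toy gravitational constant

def acceleration(body_pos, sources):
    """Net acceleration on a body at body_pos from point sources (1-D toy)."""
    total = 0.0
    for pos, mass in sources:
        r = pos - body_pos
        total += G * mass / r**2 * (1.0 if r > 0 else -1.0)
    return total

known_conditions = [(10.0, 100.0)]  # the bodies we already believe in

# "Nature" secretly contains one more body; our measurement reflects it.
observed = acceleration(0.0, known_conditions + [(20.0, 30.0)])

predicted = acceleration(0.0, known_conditions)
anomaly = observed - predicted  # the prediction fails: an anomaly

# DN-style repair: keep the law untouched and supplement the initial
# conditions with a hypothesised body chosen so that the anomalous
# behaviour becomes derivable again.
neptune_hypothesis = (20.0, 30.0)
repaired = acceleration(0.0, known_conditions + [neptune_hypothesis])
print(abs(repaired - observed) < 1e-12)  # True: the anomaly is explained away
```

The sketch also makes the division of labour visible: the anomaly, not the mathematics, suggests the new body; the calculation only fixes which mass and position would do the explanatory job.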

We can call this kind of prediction "standard".[3] However, not all existential predictions are like the prediction of Neptune. Let's take a look at a completely different kind of existential prediction—one that cannot be accounted for by means of the DN model.

Non-standard predictions: the omega minus particle case

An example of non-standard prediction is the prediction of the so-called "omega minus particle", independently predicted by Gell-Mann and Ne'eman in 1962 and then discovered in 1964. We can get a rough but still fairly accurate picture of this prediction by reading the following passage from Ne'eman and Kirsh (1996):[4]

In 1961 four baryons of spin 3/2 were known. These were the four resonances Δ⁻, Δ⁰, Δ⁺, Δ⁺⁺, which had been discovered by Fermi in 1952. It was clear that they could not be fitted into an octet, and the eightfold way predicted that they were part of a decuplet or of a family of 27 particles. A decuplet would form a triangle in the S–I₃ [strangeness–isospin] plane, while the 27 particles would be arranged in a large hexagon. (According to the formalism of SU(3), supermultiplets of 1, 8, 10 and 27 particles were allowed.) In the same year (1961) the three resonances Σ(1385) were discovered, with strangeness −1 and probable spin 3/2, which could fit well either into the decuplet or the 27-member family.
At a conference of particle physics held at CERN, Geneva, in 1962, two new resonances were reported, with strangeness −2 and electric charges −1 and 0 (today known as the Ξ(1530)). They fitted well into the third course of both schemes (and could thus be predicted to have spin 3/2). On the other hand, Gerson and Shoulamit Goldhaber reported a 'failure': in collisions of K⁺ or K⁰ with protons and neutrons, one did not find resonances. Such resonances would indeed be expected if the family had 27 members.

The creators of the eightfold way, who attended the conference, felt that this failure clearly pointed out that the solution lay in the decuplet. They saw the pyramid [see fig. above] being completed before their very eyes. Only the apex was missing, and with the aid of the model they had conceived, it was possible to describe exactly what the properties of the missing particle should be! Before the conclusion of the conference Gell-Mann went up to the blackboard and spelled out the anticipated characteristics of the missing particle, which he called ‘omega minus’ (because of its negative charge and because omega is the last letter of the Greek alphabet). He also advised the experimentalists to look for that particle in their accelerators. Yuval Ne’eman had spoken in a similar vein to the Goldhabers the previous evening and had presented them in a written form with an explanation of the theory and the prediction. (pp. 202-203)
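The "reading off the scheme" that Ne'eman and Kirsh describe can be made concrete for the mass. Within the decuplet, the Gell-Mann–Okubo mass formula reduces to an approximately equal spacing between successive rows of the pyramid, so the mass at the apex can be extrapolated from the three known rows. The sketch below uses the approximate resonance masses known in 1962; the exact figures are illustrative.

```python
# Equal-spacing rule inside the decuplet: mass rises by a roughly constant
# step each time strangeness S decreases by one. Masses in MeV (approximate).
masses = {
    0: 1232.0,   # Delta(1232)
    -1: 1385.0,  # Sigma(1385)
    -2: 1530.0,  # Xi(1530)
}

spacings = [masses[-1] - masses[0], masses[-2] - masses[-1]]  # 153, 145
mean_step = sum(spacings) / len(spacings)

# The apex (S = -3) of the pyramid: the predicted omega minus.
omega_predicted = masses[-2] + mean_step
print(round(omega_predicted))  # 1679
```

The extrapolation lands around 1679 MeV; the Ω⁻ was eventually observed at about 1672 MeV. Note that no interaction with other bodies is computed anywhere: the number comes from the shape of the scheme itself.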

In this case, however, it does not seem that the prediction was made in order to explain away an anomaly. What would the anomaly be here? The fact that there is an empty place in the decuplet scheme cannot be considered an anomaly, because this empty place does not undermine the natural laws at issue. Consider the following hypothetical case. Imagine the prediction of the existence of a tenth spin-3/2 baryon had turned out to be wrong. This failure could take two different forms:

  (A) no tenth particle is found at all;
  (B) a tenth particle is found, but with completely different characteristics from those predicted by the decuplet scheme.

In the second case, we would have a real anomaly, since the measurements could not be accounted for by our theory. In case (A), instead, the "anomaly" would consist simply in the fact that the symmetry scheme turned out to have an empty place. But would this really be an anomaly? My answer is: No, it would not! Suppose experimental physicists had found no new particle with the predicted characteristics: should we have dropped the SU(3) symmetry scheme? That seems unreasonable, for the scheme could still be regarded as a valuable tool for representing the class of spin-3/2 baryons.

Hence, the fact that a formalism seems to commit us to the existence of an entity that does not exist cannot by itself be regarded as a failure. There are many cases in which a formalism seems to commit us to entities that we do not regard as actually existing, and yet we continue to use it without worrying about these "fictional" entities. Consider the applicability of analytic functions in thermodynamics: we can treat the critical temperature of a ferromagnet as an analytic function of the number of its dimensions. Since we cannot solve the problem for the 3-dimensional magnet, we solve it for a 4-dimensional magnet, expand the resulting function as a power series in the complex plane around the number 4, and finally plug in the value 3. The catch is that in this procedure we may end up, at some point, with dimensions like 3.5, or even 2+3i! The analytic function is used here as a formal trick—and it works perfectly! Now, should we accept such weird magnet dimensions as physically real just because they appear in the formalism? Of course not! Should we abandon such a calculational tool just because it seems to commit us to weird dimensions? Again, no! We simply accept them as a weird consequence of the mathematical trick we are exploiting.
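The trick can be mimicked in a few lines. Here the exponential function is an invented stand-in for the (much harder) thermodynamic function: we pretend it is only computable near d = 4, expand it in a power series there, and then evaluate the series at d = 3, or even at a complex "dimension", exactly as the formalism allows.

```python
import cmath
import math

def f_series(d, a=4.0, terms=40):
    """Taylor expansion of exp around a, evaluated at a (possibly complex) d.

    exp is a toy stand-in for a quantity we can only compute near d = a.
    """
    return cmath.exp(a) * sum((d - a) ** k / math.factorial(k)
                              for k in range(terms))

# "Plug in d = 3": the continuation reproduces the true value.
print(abs(f_series(3.0) - math.exp(3)) < 1e-9)   # True

# The formalism happily accepts d = 3.5 or even d = 2 + 3i, and agrees
# with the analytic continuation, though no magnet has such a dimension.
print(abs(f_series(2 + 3j) - cmath.exp(2 + 3j)) < 1e-6)  # True
```

The intermediate "dimensions" 3.5 and 2+3i appear in the calculation and then drop out of sight; nothing in the procedure invites us to reify them.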

Cases like these—and the hypothetical failure (A) for the omega minus prediction falls within this group—show that there is an important distinction to be drawn concerning the representative role of mathematics in physics. As a first approximation, we can say that a mathematical structure can play a representative role without being fully representative; or, in a slightly different terminology, that a mathematical structure playing a representative role can be either "perfectly fitting" or "redundant" (i.e., not perfectly fitting). In the first case, every element in the mathematical structure plays a representative role; in the second, this is not the case. Importantly, the fact that a mathematical structure is redundant does not necessarily undermine its representative effectiveness.

I must say that not everybody would agree with this analysis. Bangu (2008), for example, thinks that in this case there is an anomaly to be explained away, and that the anomaly is precisely the empty place in the decuplet scheme. However, even if Bangu and I disagree on this point, we both agree that this prediction cannot be accounted for in the same way as "standard" existential predictions. The reason is that, even if you think there is an anomaly to be explained away here and that the new particle was predicted in order to explain it away, the way in which the prediction was made remains very peculiar. Look at how Gell-Mann and Ne'eman predicted the characteristics of the new particle: they did not consider the interactions of the new entity with the other particles in the scheme. They simply looked at the scheme and extracted the relevant information out of it! Whereas in the case of the planet Neptune the characteristics of the new entity were derived by considering its interactions with the bodies already known, here the procedure is completely different!

What is even more interesting in this case is that mathematics seems to play a very peculiar role in the prediction. Mathematics is used here to represent a certain class of particles, but this representation turns out to have a wonderful heuristic potential! Where does this heuristic potential come from? What is really surprising is that it seems to have been already enclosed in the representative effectiveness of the mathematical structure employed. Indeed, the prediction of this new physical entity seems to be motivated only by the mathematics. Just to be clear: this does not amount to saying that no empirical fact played a role in shaping the prediction. What I am stressing is that the justification for the prediction seems to be purely mathematical—namely, purely based on the mathematical formalism employed.[5]

This peculiarity of mathematics does not seem to be limited to this case, or to existential predictions only. In his famous (1998) book,[6] Mark Steiner argues that the role of mathematics in contemporary physics is really unique. According to him, contemporary physicists very often draw important consequences about the physical world by relying on purely formal mathematical considerations, or "analogies", which seem not to be in any sense rooted in the content of the mathematical representations. In this sense, the applicability of mathematics turns out to be "magic" or—as Wigner (1960) would have put it—"miraculous". Steiner himself justifies the appropriateness of the word "magic" in this context:

Expecting the forms of our notation to mirror those of (even) the atomic world is like expecting the rules of chess to reflect those of the solar system. I shall argue, though, that some of the greatest discoveries of our century were made by studying the symmetries of notation. Expecting this to be any use is like expecting magic to work. (Steiner 1998, p. 72)


The philosophical problem I have sketched in this post can be summed up as follows: Where does this heuristic effectiveness of mathematics come from? How can a mathematical structure disclose such a heuristic potential? Under which conditions can a mathematical structure reveal its heuristic effectiveness? And finally: since not all mathematical structures seem to have such heuristic effectiveness, how can we distinguish between heuristically fruitful mathematical representations and heuristically fruitless ones?

In my article "Avoiding Reification" I analyse the prediction of the omega minus particle, address all these questions, and suggest an answer to them. The interested reader can have a look at that article, as well as at the articles and books quoted in this post.

[1] Ginammi, Michele (2016), "Avoiding reification: Heuristic effectiveness of mathematics and the prediction of the omega minus particle", Studies in History and Philosophy of Modern Physics, vol. 53, February, pp. 20-27.
[2] Hempel, Carl G. (1965), Aspects of Scientific Explanation, Free Press, New York.
[3] Another example of this kind of prediction is Pauli's prediction of the neutrino. In that case too we have an anomaly; the new entity is postulated just in order to explain away the anomaly; and mathematics is used to derive the appropriate characteristics of the new entity (appropriate, that is, to deduce—together with the proper laws of nature—the behaviour that was previously puzzling).
[4] Ne'eman, Yuval and Kirsh, Yoram (1996), The Particle Hunters, Cambridge University Press, Cambridge.
[5] Other examples of this kind of prediction are Dirac's prediction of the positron and Mendeleev's prediction of new chemical elements on the basis of the periodic table. A more recent example is the prediction of the Higgs boson—the so-called "God particle" (I must admit, however, that I do not know much about this particular case, so I could be wrong on this point). All these cases share the fact that the prediction seems to be justified by purely mathematical considerations.
[6] Steiner, Mark (1998), The Applicability of Mathematics as a Philosophical Problem, Harvard University Press, Cambridge, Mass.
