
Here is a very interesting post on possible alternatives to Cantor’s theory of transfinite numbers! Was Cantor’s notion of the infinite really “unavoidable”?

The topic is particularly interesting when considered within the general framework of mathematical development. How does mathematics evolve? What “forces” mathematics to take certain roads instead of others? And how do these choices influence the way in which we model and understand the natural world?

A prominent professor in the philosophy of mathematics once told me that the key to writing an attractive philosophy paper is to present the reader with a puzzle. “Give me a puzzle, and I’ll be interested”, he said. As I was surrounded by mathematicians and philosophers of mathematics who were constantly exchanging puzzles, I had no doubt that he was right: mathematicians and philosophers of mathematics like puzzles. But then, mightn’t it be the case that this fondness for puzzles influences much more than just our judgement of a philosophy paper (and our conversations over dinner)? Here’s a crazy idea (or maybe not so crazy): does our desire to be puzzled affect our judgement of a certain foundational mathematical theory?

The foundational mathematical theory which I have in mind is, of course, Cantor’s transfinite set theory. Given its general acceptance nowadays, it is easy to forget that in order to generalize arithmetic from the finite to…



Killing Pluto was fun, but this is head and shoulders above everything else. (M. Brown)

It was only yesterday that I published a post on standard and non-standard existential predictions. Thus, I was very happy today when, opening the newspaper, I read that some astronomers have predicted the existence of a new planet in the Solar System! Michael E. Brown and Konstantin Batygin have just published a paper in the current issue of the Astronomical Journal, in which they argue in favour of the existence of a ninth planet, nearly the size of Neptune, orbiting the Sun every 15,000 years. The news is also reported by the magazine *Science*. The existence of this “Planet X” has not yet been confirmed by any discovery, so for the moment this prediction is nothing but a simple hypothesis.

Whether or not this prediction will be confirmed, it seems to be analogous to the prediction of Neptune. As with Neptune, the existence of this new planet has been postulated in order to “explain away” an anomaly. In this case, the anomaly concerns a clustering of six previously known objects that orbit beyond Neptune. What is anomalous, here, is the fact that these objects are grouped in a cluster and that their orbits are quite similar. As Brown and Batygin say, “the perihelion positions and orbital planes of [*these*] objects are tightly confined and […] such a clustering has only a probability of 0.007% to be due to chance, thus requiring a dynamical origin”. A ninth planet with the proper characteristics would explain away this “anomaly”.

It must be noted, however, that the anomaly that this ninth planet would “explain away” is a bit different from the anomaly that Neptune explained away. In the Neptune case, the anomaly was—so to say—much stronger than the clustering anomaly. There, the anomaly was something that the theory *could not account for*; here, the anomaly is something that the theory *could* account for, but with a very low degree of probability. In other words, in the first case the anomaly *undermines* the theory, whereas in the second case the anomaly is still *compatible* with the theory, but the probability that the disposition of these objects orbiting beyond Neptune is simply due to chance is really low. We can then distinguish between two kinds of anomaly—let’s call them “incompatible” and “compatible” anomalies.

This story also has an ironic shade, since Brown is better known as a planet slayer than as a planet discoverer! His 2005 discovery of Eris, a remote icy world nearly the same size as Pluto, revealed that what was previously seen as the outermost planet was just one of many worlds in the Kuiper belt. As a consequence of this discovery, astronomers promptly reclassified Pluto as a dwarf planet. The whole story is recounted by Brown in his 2010 book *How I Killed Pluto and Why It Had It Coming*. However, as Brown reportedly commented, “Killing Pluto was fun, but this [*the discovery of a new planet*] is head and shoulders above everything else”.


Typically, these predictions are made possible by an accurate mathematical representation of the physical system (or phenomenon) at issue. This raises questions like: How can we predict the existence of a new, *concrete* entity on the basis of mathematical, *abstract* considerations? How can mathematics, which can be defined as the study of *possible* structures, say something about elements of *real* structures? How can mathematics, which is developed by mathematicians for *completely different purposes*, turn out to be so effective in representing physical reality and fostering new discoveries?

A classical example of existential prediction is the prediction of the planet Neptune. During the first half of the nineteenth century, astronomers noticed several anomalies in the astronomical tables for the planet Uranus (Neptune’s neighbour). Briefly put, Uranus was not *exactly* where astronomers expected to find it. The deviation from the expected position was minimal, but enough to raise concerns. There were two possible explanations for this deviation: the first was that the theory (from which the predictions were derived) was wrong; the second was that not the theory, but the *initial conditions* were wrong. Now, the “theory” in this case is Newton’s theory of gravitation, and nobody at that time could seriously suppose that Newton’s theory of gravitation was wrong! Therefore, astronomers opted for the second possibility: that something was missing from the initial conditions of the Solar System. What could they have missed? Well, an analysis of the Uranus tables suggested that there was probably an unknown body altering Uranus’ orbit. Thus, they started working on this, and they eventually made a prediction: that there is an eighth planet in the Solar System—Neptune, indeed. In order to account for the previous anomalies, this planet had to be such-and-such: with such-and-such a mass, at such-and-such a position, with such-and-such a speed, and so on. Finally, in 1846, the new planet was actually observed, and the prediction was thus confirmed.

Now, what role did mathematics play in this prediction? An important one, of course, but not a decisive one. Mathematics seems to have been used in this case only to predict the *characteristics* of the new planet: based on accurate calculations, astronomers established that, *in order to explain away the anomalies*, the planet had to have the predicted characteristics. However, mathematics did not really play any role in suggesting the prediction; the prediction was suggested by the anomalies. In other words, mathematics did not suggest the existence of anything; it only made it possible to *specify the conditions under which the new entity could explain away the anomalies*.

This kind of existential prediction can easily be accounted for by the so-called deductive-nomological model (DN hereafter) proposed by Hempel (1965).^{2} According to this model, to *explain* a scientific fact is to derive it from a set of laws of nature *plus* some initial conditions. Analogously, we can also *predict* a scientific fact by applying the laws to the proper circumstances (described by the initial conditions). However, when the predictions are not confirmed by experience and we end up with *anomalies* (*i.e.*, when a physical system previously thought to obey a certain set of laws appears to behave as if the laws no longer applied to it), scientists will try to explain the reasons for this anomalous behaviour. In other words, they will try to “explain away” the anomalies. According to the DN model, this amounts to showing how this anomalous behaviour (assuming that the measurements revealing the anomalies are accurate enough) can be derived from a set of laws and initial conditions. Scientists have two options: either they change the laws, or they change the initial conditions. The first option is highly expensive, so they usually prefer to opt—at least as a first step—for the second one. This option involves a revision of the initial conditions, and this revision may consist in a supplementation of them, thus predicting a new entity. The criteria guiding this supplementation are defined by the DN model: the new initial conditions must be such that, by means of them, we are able to derive (and hence explain) from our set of laws the behaviour we previously considered anomalous.
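The DN-style repair of an anomaly can be sketched in a few lines of code. What follows is a deliberately toy model (invented “law”, invented numbers, nothing astronomically accurate): a prediction derived from laws plus initial conditions misses the observed value, and supplementing the initial conditions with a postulated extra body shrinks the discrepancy.

```python
def predict_position(bodies, t):
    """Toy 'law of nature': position = initial position + velocity * t,
    plus a small pull toward every other body (a cartoon of gravity)."""
    pull = 0.1  # invented coupling constant
    positions = {}
    for name, (x0, v) in bodies.items():
        x = x0 + v * t
        for other, (ox0, ov) in bodies.items():
            if other != name:
                x += pull * ((ox0 + ov * t) - x0) / 100
        positions[name] = x
    return positions

# Initial conditions as known before the prediction: Uranus alone (toy numbers).
known = {"uranus": (19.2, 1.0)}
observed = 29.23  # "measured" position, slightly off from the derived one

predicted = predict_position(known, t=10)["uranus"]
anomaly = abs(observed - predicted)  # small, but real

# DN-style repair: keep the laws fixed, supplement the initial conditions
# with a postulated eighth planet whose parameters absorb the deviation.
known["neptune"] = (30.1, 0.5)
repaired = predict_position(known, t=10)["uranus"]
assert abs(observed - repaired) < anomaly  # the anomaly is (partly) explained away
```

The point of the sketch is only structural: nothing in the laws changes; only the description of what the system contains does.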

We can call this kind of prediction “standard”.^{3} However, not all existential predictions are like that of Neptune. Let’s take a look at a completely different kind of existential prediction: a kind that *cannot* be accounted for by means of the DN model.

An example of non-standard prediction is the prediction of the so-called “omega minus” particle (Ω⁻), independently predicted by Gell-Mann and Ne’eman in 1962 and then discovered in 1964. We can get a rough but still accurate picture of this prediction by reading the following passage from Ne’eman and Kirsh (1996):^{4}

In 1961 four baryons of spin 3/2 were known. These were the four resonances Δ⁻, Δ⁰, Δ⁺, Δ⁺⁺, which had been discovered by Fermi in 1952. It was clear that they could not be fitted into an octet, and the eightfold way predicted that they were part of a decuplet or of a family of 27 particles. A decuplet would form a triangle in the [strangeness–isospin] plane, while the 27 particles would be arranged in a large hexagon. (According to the formalism of SU(3), supermultiplets of 1, 8, 10 and 27 particles were allowed.) In the same year (1961) the three Σ* resonances were discovered, with strangeness −1 and probable spin 3/2, which could fit well either into the decuplet or into the 27-member family.

At a conference of particle physics held at CERN, Geneva, in 1962, two new resonances were reported, with strangeness −2 and electric charges −1 and 0 (today known as the Ξ*). They fitted well into the third row of both schemes (and could thus be predicted to have spin 3/2). On the other hand, Gerson and Shoulamit Goldhaber reported a ‘failure’: in collisions of K⁺ or K⁰ mesons with protons and neutrons, one did not find resonances. Such resonances would indeed be expected if the family had 27 members.

The creators of the eightfold way, who attended the conference, felt that this failure clearly pointed out that the solution lay in the decuplet. They saw the pyramid [see fig. above] being completed before their very eyes. Only the apex was missing, and with the aid of the model they had conceived, it was possible to describe exactly what the properties of the missing particle should be! Before the conclusion of the conference Gell-Mann went up to the blackboard and spelled out the anticipated characteristics of the missing particle, which he called ‘omega minus’ (because of its negative charge and because omega is the last letter of the Greek alphabet). He also advised the experimentalists to look for that particle in their accelerators. Yuval Ne’eman had spoken in a similar vein to the Goldhabers the previous evening and had presented them in a written form with an explanation of the theory and the prediction. (pp. 202-203)

In this case, it does not seem that the prediction was made in order to explain away an anomaly. What would the anomaly be in this case? The fact that there is an empty place in the decuplet scheme cannot be considered an anomaly, because this empty place does not undermine the natural laws at issue. Consider the following hypothetical case. Imagine the prediction of the existence of a tenth spin-3/2 baryon had turned out to be wrong. This failure could take two different forms:

- (A) we did not find any particle at all;
- (B) we *did* find a tenth particle, but this tenth particle had *completely different characteristics* from the ones predicted by the decuplet scheme.

In case (B), we would have a real anomaly, since the measurements could not be accounted for by our theory. In case (A), instead, the anomaly would seem to consist simply in the fact that the symmetry scheme turned out to have an empty place. But if this were the case, would it really be an anomaly? My answer is: no, it would not! Supposing that experimental physicists had not found any new particle with the characteristics pointed out, should we have dropped the SU(3) symmetry scheme? This seems unreasonable, for it could still be regarded as a valuable tool for *representing* the class of spin-3/2 baryons.

Hence, the fact that the formalism seems to commit us to the existence of an entity that does not exist cannot be regarded as a defect. There are many cases in which a formalism seems to commit us to entities that we do not regard as actually existing, and yet we continue to use those formalisms without worrying about these “fictional” entities. Consider the case of the applicability of analytic functions to thermodynamics: we know we can treat the critical temperature of a ferromagnet as an analytic function of the number of its dimensions. But, since we cannot calculate the problem for the 3-dimensional magnet, we calculate it for a 4-dimensional magnet, then we expand the function as a power series in the complex plane around the number 4, and finally we plug in the value 3. Now, the problem is that in this procedure we may end up, at some point, with dimensions like 3.5, or even complex ones! The analytic function is used here just as a formal trick, and it works perfectly! Now, the point is: should we accept such weird magnet dimensions as physically real *just because they appear in the formalism*? Of course not! Should we abandon such a calculational tool just because it seems to commit us to weird dimensions? Not at all! We simply accept them as a weird consequence of the mathematical trick we are exploiting.
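The “formal trick” can be illustrated with a toy computation. Here the “critical temperature” law T(d) = 1/(d − 1) is invented purely for illustration (the real ferromagnet calculation is far harder): we pretend only d = 4 is tractable, expand around it, and plug in d = 3. The same series happily accepts a “dimension” of 3.5, or even a complex one, values nobody takes to be physically real.

```python
from math import factorial, isclose

def T(d):
    """Invented toy 'critical temperature' law; for illustration only."""
    return 1.0 / (d - 1)

def taylor_term(n, d):
    """n-th Taylor term of T around d = 4; the n-th derivative of 1/(d-1)
    at d = 4 is (-1)**n * n! / 3**(n+1)."""
    deriv_at_4 = (-1) ** n * factorial(n) / 3 ** (n + 1)
    return deriv_at_4 * (d - 4) ** n / factorial(n)

# "Calculate at 4, expand, then plug in 3": the value at the intractable
# dimension is recovered entirely from data gathered at the tractable one.
approx = sum(taylor_term(n, 3) for n in range(40))
assert isclose(approx, T(3), rel_tol=1e-9)  # recovers T(3) = 0.5

# The series is equally happy with a non-integer or complex 'dimension':
weird = sum(taylor_term(n, 3.5 + 0.1j) for n in range(40))
```

Nobody reads the intermediate values as physics; they are artefacts of the calculational device, exactly as with the redundant places of a symmetry scheme.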

Cases like these—and the hypothetical failure (A) for the omega minus prediction falls within this group—point out that there is an important distinction to be made here about the representative role of mathematics in physics. As a first approximation, we can say that a mathematical structure can play a representative role *without being fully representative*; or, in a slightly different terminology, we can say that a mathematical structure playing a representative role can be either “perfectly fitting” or “redundant” (*i.e.*, not perfectly fitting). In the first case, *every* element in the mathematical structure plays a representative role; in the second, this is not the case. Importantly, the fact that a mathematical structure is “redundant” does not necessarily undermine its representative effectiveness.

I must say that not everybody would agree with this analysis. Bangu (2008), for example, thinks that in this case *there is* an anomaly to be explained away, and that the anomaly is precisely the empty place in the decuplet scheme. However, even if Bangu and I do not agree on this point, we both agree that this prediction cannot be accounted for in the same way as other “standard” existential predictions. The reason is that, even if you think that in this case *there is* an anomaly to be explained away and that the new particle was predicted in order to explain it away, the way in which this prediction was made is still very peculiar. Indeed, look at how Gell-Mann and Ne’eman predicted the characteristics of the new particle: they did not consider the interactions of the new entity with the other particles in the scheme. They simply looked at the scheme and *extracted* the relevant information from it! If in the case of the planet Neptune the characteristics of the new entity were derived by considering the *interactions* of the alleged new entity, here the procedure is completely different!
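The “extraction” can be caricatured as simple arithmetic on the scheme. The equal-spacing rule for the decuplet rows is the substantive assumption doing the work here; the masses are rounded experimental averages.

```python
# Rounded average masses (MeV) of the first three decuplet rows: Δ, Σ*, Ξ*.
row_masses = [1232, 1385, 1530]

# Equal-spacing rule (the assumption doing all the work): successive rows
# differ by roughly the same mass gap, so the apex sits one more step up.
gaps = [b - a for a, b in zip(row_masses, row_masses[1:])]
predicted_omega = row_masses[-1] + sum(gaps) / len(gaps)

print(round(predicted_omega))  # 1679 -- the Ω⁻ was found at about 1672 MeV
```

No interaction between the new particle and the others enters the calculation; the prediction is read off the pattern of the scheme itself.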

What is even more interesting in this case is that mathematics seems to play a very peculiar role in the prediction. Mathematics is used here to *represent* a certain class of particles, but this representation turns out to have a wonderful *heuristic* potential! Where does this heuristic potential come from? What is really surprising is that this heuristic potential seems to have been already enclosed in the representative effectiveness of the mathematical structure employed. Indeed, the prediction of this new physical entity seems to be motivated *only* by the mathematics employed. Just to be clear: this does not amount to saying that *no empirical fact* played a role in shaping the prediction. What I am stressing here is that the *justification* for the prediction seems to be purely mathematical—namely, purely based on the mathematical formalism employed.^{5}

This peculiarity of mathematics seems not to be limited to this case, or to existential predictions only. In his famous (1998) book,^{6} Mark Steiner argues that the role of mathematics in contemporary physics is really unique. According to him, contemporary physicists very often draw important consequences about the physical world by relying on *purely formal* mathematical considerations, or “analogies”, which seem not to be in any sense rooted in the *content* of the mathematical representations. In this sense, the applicability of mathematics turns out to be “magic” or, as Wigner (1960) would have put it, “miraculous”. Steiner himself justifies the appropriateness of the word “magic” in this context:

Expecting the forms of our notation to mirror those of (even) the atomic world is like expecting the rules of chess to reflect those of the solar system. I shall argue, though, that some of the greatest discoveries of our century were made by studying the symmetries of notation. Expecting this to be any use is like expecting magic to work. (Steiner 1998, p. 72)

The philosophical problem I have sketched in this post can be summed up as follows: Where does this heuristic effectiveness of mathematics come from? How can a mathematical structure disclose such a heuristic potential? Under which conditions can a mathematical structure reveal its heuristic effectiveness? And finally: since not all mathematical structures seem to have such heuristic effectiveness, how can we distinguish between heuristically fruitful mathematical representations and heuristically fruitless ones?

In my article *Avoiding Reification* I have analysed the prediction of the omega minus particle, addressed all these questions, and suggested an answer to them. The interested reader can take a look at that article, as well as at the articles and books quoted in this post.

[1] ⇑ Ginammi, Michele (2016), “Avoiding reification: Heuristic effectiveness of mathematics and the prediction of the omega minus particle”, *Studies in History and Philosophy of Modern Physics*, vol. 53, February, pp. 20-27.

[2] ⇑ Hempel, Carl G. (1965), *Aspects of Scientific Explanations*, Free Press, New York.

[3] ⇑ Another example of this kind of prediction is Pauli’s prediction of the neutrino. In this case too we have an anomaly; the new entity is postulated just in order to explain away the anomaly; and mathematics is used to derive the appropriate characteristics of this new entity (appropriate for deducing, together with the proper laws of nature, the behaviour that was previously puzzling).

[4] ⇑ Ne’eman, Yuval and Kirsh, Yoram (1996), *The Particle Hunters*, Cambridge University Press, Cambridge.

[5] ⇑ Other examples of this kind of prediction are Dirac’s prediction of the so-called positron, or Mendeleev’s prediction of new chemical elements on the basis of the periodic table. A more recent example is the prediction of the Higgs boson, the so-called “God particle” (I must admit, however, that I do not know much about this particular case, so I could be wrong on this point). All these cases share the fact that the prediction seems to be justified by purely mathematical considerations.

[6] ⇑ Steiner, Mark (1998), *The Applicability of Mathematics as a Philosophical Problem*, Harvard University Press, Cambridge, MA.


The article is already accessible online at this address. By clicking on this link before February 23, 2016, you will be taken to the final version of my paper on ScienceDirect **for free**! No sign-up or registration required!

In this article I have discussed and critically examined a very interesting case of existential prediction in particle physics: the prediction of the Ω⁻ particle (a particle of the class of the spin-3/2 baryons). Existential predictions in science are always very thrilling, as you may imagine; but this prediction is even more interesting than usual *because of the peculiar role that mathematics seems to play in it*. Such a peculiar role raises a serious philosophical problem, since apparently we cannot justify it on the basis of standard methodological criteria. In this paper I discuss this problem and offer a solution to it, based on a new logical reconstruction of the prediction of the Ω⁻ particle and on the representative and heuristic effectiveness that mathematics may exhibit under certain conditions.

Here is the abstract of the paper, just to give you an idea of the content:

According to Steiner (1998), in contemporary physics new important discoveries are often obtained by means of strategies which rely on *purely formal* mathematical considerations. In such discoveries, mathematics seems to have a peculiar and controversial role, which apparently cannot be accounted for by means of standard methodological criteria. M. Gell-Mann and Y. Ne’eman’s prediction of the Ω⁻ particle is usually considered a typical example of the application of this kind of strategy. According to Bangu (2008), this prediction is apparently based on the employment of a highly controversial principle, what he calls the “reification principle”. Bangu himself takes this principle to be methodologically unjustifiable, but still indispensable to make the prediction logically sound. In the present paper I will offer a new reconstruction of the reasoning that led to this prediction. By means of this reconstruction, I will show that we do not need to postulate any “reificatory” role of mathematics in contemporary physics, and I will contextually clarify the representative and heuristic role of mathematics in science.

Good reading, and happy new year to everybody!


Here you can find the article. Good reading!


*This book* — as the back cover says — *brings together young researchers from a variety of fields within mathematics, philosophy and logic. It discusses questions that arise in their work, as well as themes and reactions that appear to be similar in different contexts. The book shows that a fairly intensive activity in the philosophy of mathematics is underway, due on the one hand to the disillusionment with respect to traditional answers, on the other to exciting new features of present day mathematics. The book explains how the problem of applicability once again plays a central role in the development of mathematics. It examines how new languages different from the logical ones (mostly figural), are recognized as valid and experimented with and how unifying concepts (structure, category, set) are in competition for those who look at this form of unification. It further shows that traditional philosophies, such as constructivism, while still lively, are no longer only philosophies, but guidelines for research. Finally, the book demonstrates that the search for and validation of new axioms is analyzed with a blend of mathematical, historical, philosophical, and psychological considerations.*

Let me express my gratitude to Gabriele Lolli, Marco Panza and Giorgio Venturi, whose initiative and perseverance made this work possible.


Something more imaginative can be found in Fritjof Capra’s *The Tao of Physics*, where he suggests the following analogy. Say you have an orange, and imagine that this orange has grown so much that it is now as big as the Earth. Its atoms would then be as big as cherries. Nevertheless, the *nucleus* of such a cherry-sized atom would still be invisible to our eyes. In order to see it, the atom would have to be as big as the dome of Saint Peter’s Basilica in Rome, and even then the nucleus would be no bigger than a grain of salt!

Switching to protons, things become way more difficult! Maybe, as Bill Bryson suggests in his *A Short History of Nearly Everything*,

No matter how hard you try you will never be able to grasp just how tiny, how spatially unassuming, is a proton. It is just way too small.

But then, fortunately, he depicts the first of a long list of vivid representations, which go through the whole book:

Protons are so small that a little dib of ink like the dot on this ‘i’ can hold something in the region of 5,000,000,000,000 of them, or rather more than the number of seconds it takes to make half a million years.

These verbal representations are great, of course. They stimulate our curiosity, strike our imagination and communicate a childlike sense of wonder which is probably one of the main sources of our knowledge. However, they cannot compete with this last, wonderful, pictorial representation of the scale of the universe. It’s simply amazing; you have to see it!

Enjoy the viewing!


The FilMat — of which I am proud to be a member — was created a few months ago by a group of Italian scholars in philosophy of mathematics who originally met at the Scuola Normale Superiore in Pisa at the 2012 conference “Philosophy of Mathematics: from Logic to Practice”. It aims to foster the gathering of scholars working, either in Italy or abroad, on the philosophy of mathematics and closely related fields, with special attention to those at early stages of their careers.

Here you can find the FilMat’s website. Here is the link to the call for abstracts for the 2014 international conference.


The argument is quite simple: if numbers were sets, we should be able to find a unique progression of sets with which numbers can be identified. But this is apparently impossible: there are many ω-series that can serve equally well for the purpose. For example, we can adopt von Neumann’s series, and say that 0 = ∅, 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, and so on, where the successor function is defined by s(n) = n ∪ {n}. Or we can adopt Zermelo’s series, and say that 0 = ∅, 1 = {0}, 2 = {1}, 3 = {2}, and so on, where the successor function is defined by s(n) = {n}. Now, the problem is: is 1 ∈ 3, or is 1 ∉ 3? Benacerraf then presents the example of two children, Ernie and John. The first learned that von Neumann’s ordinals are the natural numbers, while the latter learned that Zermelo’s ordinals are the natural numbers. Now, they will easily be able to learn arithmetic set-theoretically via the above constructions, and they will agree on every arithmetical theorem, except that for Ernie it is true that 1 ∈ 3, while for John it is false!

It can sound quite odd to ask whether one number *belongs* to another number or not. It is, in fact, not an *arithmetical* question. But this is the point: Ernie and John agree on every *arithmetical* question; they disagree only on non-arithmetical issues, and these issues cannot be considered essential for pointing out the *metaphysical* status of numbers. We can do arithmetic either with Zermelo’s ordinals or with von Neumann’s ordinals; but if we are going to metaphysically identify numbers with sets, we must choose one of the two. We cannot admit that 3 = {0, 1, 2} *and* 3 = {{{∅}}}. But how can we choose? According to Benacerraf, we *cannot* choose, since there is no *arithmetical* reason to prefer one series over the other. Thus, since we cannot have more than one set of natural numbers, we must admit that *numbers are not sets*.
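Ernie and John’s situation is easy to reproduce concretely. The sketch below (standard definitions, encoded with Python frozensets; the helper names are mine) builds both series and checks that they disagree exactly on the non-arithmetical question.

```python
def von_neumann(n):
    """von Neumann: 0 = ∅, s(n) = n ∪ {n}, so n = {0, 1, ..., n-1}."""
    num = frozenset()
    for _ in range(n):
        num = num | frozenset([num])
    return num

def zermelo(n):
    """Zermelo: 0 = ∅, s(n) = {n}, so n = {n-1}."""
    num = frozenset()
    for _ in range(n):
        num = frozenset([num])
    return num

# Either series gives a perfectly good progression: distinct numbers
# get distinct sets, so arithmetic can be carried out in both.
for encode in (von_neumann, zermelo):
    codes = [encode(i) for i in range(6)]
    assert len(set(codes)) == 6

# But on Benacerraf's non-arithmetical question "is 1 ∈ 3?" they split:
assert von_neumann(1) in von_neumann(3)  # true for Ernie
assert zermelo(1) not in zermelo(3)      # false for John
```

Both runs of assertions pass: the two children cannot be told apart by anything arithmetical, only by set-theoretic questions such as membership.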

But then, what are numbers? Benacerraf’s solution is that we have to move from objects to structures. What permits Ernie and John to agree on arithmetical theorems is not the nature of any single number, but the fact that they are considering two different instantiations of the very same structure; so we can say that numbers are anything that has the right kind of structure.

Benacerraf’s article has been variously discussed over the years, but there have been very few attempts to directly challenge his argument. One of the most interesting of these attempts can be found in Eric Steinhart’s 2002 article “Why Numbers Are Sets”.^{2} In this article, Steinhart argues that we actually have reasons to prefer von Neumann’s ordinals over Zermelo’s ordinals (and over any other alternative), and hence that we can overcome Benacerraf’s worries and say that yes, numbers really are (metaphysically) sets. Benacerraf poses two conditions that a set-theoretic structure N must satisfy in order to be a good candidate for being the natural numbers:^{3}

**arithmetical condition (AC)**: N satisfies the AC *iff* N is a model of the Dedekind–Peano axioms;

**cardinal condition (CC)**: N satisfies the CC *iff* it identifies the numerical ‘less than’ relation with a set-theoretic relation such that the cardinality of a set S is n *iff* there is a 1-1 correspondence between S and {m ∈ N : m < n}.

Taken together, AC and CC constitute the “natural number condition” (NNC). According to Benacerraf, NNC suffices to define the natural numbers, so that any further condition we might want to add is more than necessary: that is what makes the choice between Zermelo’s and von Neumann’s ordinals impossible. Steinhart, on the contrary, claims that actually

there is one set of sets that stands out very clearly for the mathematicians as the natural numbers. The mathematicians standardly identify the natural numbers with the finite von Neumann ordinals. They make the identification because not all apparent reductions satisfy the NN-conditions equally well. (p. 345)

Five reasons, according to Steinhart, justify this preference:

1. the set of von Neumann ordinals is recursively defined;
2. its sets uniquely satisfy certain ordering conditions;
3. it is uniformly extendible to the transfinite;
4. it is a minimal ω-series;
5. its n-th member is the set of all numbers less than n.

Points 1-5 actually specify five further conditions, beyond NNC, that N has to satisfy in order to be the natural numbers. However, one might still hold that, even if 1-5 justify the mathematicians’ preference for von Neumann ordinals, they still don’t compel us to admit that numbers have to be *metaphysically* identified with this set of ordinals. Indeed, 1-5 seem to be rather *stylistic* reasons, without metaphysical relevance for our choice. Yet what is interesting in Steinhart’s article is a mathematical proof he gives to convince us that the natural numbers actually are the von Neumann ordinals. The proof runs as follows.

CC implies the following axiom (C1) and definition (C2):

**C1**: for all n, if n is in N, then there exists the set {m ∈ N : m < n};

**C2**: the cardinality of any set S is n *iff* there exists some 1-1 correspondence between S and {m ∈ N : m < n}.

Now, let’s suppose we choose an ω-series N to serve as our natural numbers. Because of CC, we must admit that, for each N-number n, the set of N-numbers less than n is in the NN-universe (i.e., it exists). Thus, if we assume that N is the natural numbers, we must admit that our NN-universe contains the cardinality sets {m : m < 0}, {m : m < 1}, {m : m < 2}, and so on. Hence, the NNC entails that the series N* of these cardinality sets is in the NN-universe. Similarly, we can define N** and N***, and show that they are all in our NN-universe. So, we can prove that if N satisfies the NNC, then N* is in the NN-universe and satisfies the NNC. Therefore, if N is the natural numbers, then N* satisfies the NNC; but if N* satisfies the NNC, then N** satisfies the NNC as well, and so on. It follows that, if the natural numbers are to be a unique series, N = N*. This means that n = {m : m < n} for any n in N. But if n = {m : m < n} for any n in N, then 0 = ∅ and s(n) = n ∪ {n}, and these are precisely the von Neumann ordinals. Then, if N is the natural numbers, N is the series of von Neumann finite ordinals.
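The property the proof turns on, namely that an N-number coincides with its own cardinality set {m : m < n}, can be checked directly for the two series (a sketch using the standard constructions; the helper names are mine):

```python
def von_neumann(n):
    """von Neumann: 0 = ∅, s(n) = n ∪ {n}."""
    num = frozenset()
    for _ in range(n):
        num = num | frozenset([num])
    return num

def zermelo(n):
    """Zermelo: 0 = ∅, s(n) = {n}."""
    num = frozenset()
    for _ in range(n):
        num = frozenset([num])
    return num

def cardinality_set(encode, n):
    """The set {encode(m) : m < n} whose existence C1 demands."""
    return frozenset(encode(m) for m in range(n))

# Every von Neumann number *is* its own cardinality set...
for n in range(6):
    assert von_neumann(n) == cardinality_set(von_neumann, n)

# ...while Zermelo numbers beyond 1 are not (3 = {2}, but {m : m < 3} has
# three members), so n = {m : m < n} singles out the von Neumann series.
assert zermelo(2) != cardinality_set(zermelo, 2)
```

This confirms the fixed-point property itself; whether the cardinality sets must live *inside* the NN-universe, as the proof requires, is exactly what is contested below.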

The argument is very interesting, but I think it is not valid. I am not completely sure about it, so I will present my objection very cautiously. The key point in Steinhart's argument is that C1 imposes the existence, for any n in N, of the set {x ∈ N : x < n}. According to Steinhart,

If the NN-conditions assert some rule R, then the NN-universe contains the domain of R, the range of R, the extension of R, and nothing else. For if we cannot reason to the existence of these objects in the NN-universe, then that rule is meaningless (it plays no role in determining the models of the NN-conditions). (p. 353)

This is what warrants Steinhart in saying that ⟨N*, S*, 0*, <*⟩ is in the NN-universe, and that's precisely what I can't understand. It seems to me that there is no need to say that the NN-universe has to contain all this stuff. Of course, the domain, the range and the extension of R must be contained in the universe *of our set theory* (the set theory in which we carry out the reduction), but there's no need to say that they have to be contained in the NN-universe. Now, if I am right in noticing this, then we are no longer entitled to say that ⟨N*, S*, 0*, <*⟩ is in the NN-universe. It is in the universe of our set theory, and, since it satisfies the NNC, this means that we have a different way to identify numbers with sets; but we can no longer conclude that N = N*, and hence the proof is no longer valid. C1 simply implies the existence *in our set-theory universe* of {x ∈ N : x < n}, which is needed in order to say that the cardinality of a set X is n *iff* X can be put in a 1-1 correspondence with the set {x ∈ N : x < n}. It remains a matter of style whether we want to adopt this or that progression of sets, but nothing compels us to say that if we can identify numbers with sets, then we cannot identify numbers with anything but the von Neumann finite ordinals.

A further consideration can be made, as a conclusion of this post. If I am right in rejecting Steinhart's proof (and, I restate, I am not sure I am), then the only way to reject Benacerraf's conclusion is to hold a strong version of mathematical naturalism: numbers are von Neumann finite ordinals because *that's what working mathematicians do*. And, vice versa, if we agree with Benacerraf, we cannot commit ourselves to too strong a version of mathematical naturalism.

[1] ⇑ P. Benacerraf (1965), “What Numbers Could Not Be”, *The Philosophical Review*, vol. 74(1), pp. 47-73. Reprinted in P. Benacerraf and H. Putnam (eds) (1984), *Philosophy of Mathematics*, Cambridge University Press, New York, pp. 272-295.

[2] ⇑ E. Steinhart (2002), “Why Numbers Are Sets”, *Synthese*, vol. 133(3), pp. 343-361.

[3] ⇑ N is a set of sets, S is a one-to-one function from N to N, 0 is a particular set belonging to N, < is a set-theoretic relation. The idea is that if we identify ⟨N, S, 0, <⟩ with the natural numbers, we have that N is the set of natural numbers, S is the successor function, 0 is the initial number and < is the ‘less than’ relation.

]]>

In 1946, Paul Arthur Schilpp asked Kurt Gödel to write a paper to be included in the collective volume on Albert Einstein for the “Library of Living Philosophers” series (edited by Schilpp himself). As shown by a letter from Schilpp to Gödel dated July 10th, 1946, Gödel himself was supposed to propose a topic for the paper, but he never replied. Schilpp then suggested he write an article on the topic “The realistic standpoint in physics and mathematics”. But Gödel was hesitant: he considered himself insufficiently expert on the topic to contribute worthily. At most, he felt able to contribute some considerations on the notion of time resulting from Einstein’s theory of relativity and on its relationship with the idealistic thesis of the non-existence of objective time. Schilpp was enthusiastic, and thus, three years later (in 1949), a short article was published under the title “A remark about the relationship between relativity theory and idealistic philosophy”.^{1}

The article turned out to be very interesting, not least because it is the only place in Gödel’s published works where he takes a stand on a general philosophical problem not strictly related to mathematics. Gödel argues that,

Following up the consequences [of the relativity theory, particularly of the general one] […] one obtains an unequivocal proof for the view of those philosophers who, like Parmenides, Kant, and the modern idealists, deny the objectivity of change and consider change as an illusion or an appearance due to our special mode of perception. (p. 202)

The argument is actually quite simple. The theory of relativity upsets all our traditional intuitions concerning time: we are used to thinking of time as an objective succession of instants, and this succession is intuitively considered the same for all observers; but the (special) theory of relativity made us aware that two events A and B that are *simultaneous* for a certain observer x can be *non-simultaneous* for a different observer y. It can even happen that A follows B according to x *and* that B follows A according to y. So, we must admit, simultaneity is not absolute, but rather *relative* (to the observer). But if relativity theory leads us to the conclusion that simultaneity is relative, then we can no longer think of reality as composed of an *objective* succession of temporal instants. And if temporal instants do not *objectively* exist, then we must conclude that change itself cannot exist, since it can take place only within them. However, this argument needs some clarifications in order to be convincing — and here is where Gödel’s genius comes to the fore.
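
The relativity of simultaneity at work here can be made concrete with the Lorentz time transformation t' = γ(t − vx/c²). The following Python sketch uses illustrative values of my own choosing, not figures from Gödel's paper:

```python
import math

c = 1.0   # units in which the speed of light is 1
v = 0.6   # observer y moves at 0.6c relative to observer x
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

def t_prime(t, x):
    """Time coordinate, in y's frame, of an event at (t, x) in x's frame."""
    return gamma * (t - v * x / c**2)

# Events A and B happen at the same time for observer x...
t_A = t_prime(0.0, 0.0)
t_B = t_prime(0.0, 1.0)
# ...but for observer y, B occurs earlier than A.
assert t_A != t_B and t_B < t_A
```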

The first problem is that saying that temporal instants (or intervals) are relative does not necessarily mean that they are not *objective*. For example, the relation “to be to the left of” is relative, since its validity depends on the position of the observer; but it still expresses an objective disposition in the world. However, what we usually call “time” is something very different from what emerges from this relativization. For we usually think that what exists objectively exists *within* temporal intervals that are themselves objective and exact. To relativize these time intervals would mean to relativize what they contain.

A relative lapse of time […], if any meaning at all can be given to this phrase, would certainly be something entirely different from the lapse of time in the ordinary sense, which means a change in the existing. The concept of existence, however, cannot be relativized without destroying its meaning completely. (p. 203n)

Moreover, this argument actually shows that time flows differently according to the observers, but one could still say that this flow of time is nonetheless an objective property of reality. However, Gödel replies,

A lapse of time […] which is not a lapse in some definite way seems to me as absurd as a colored object which has no definite colors. But, even if such a thing were conceivable, it would again be something totally different from the intuitive idea of the lapse of time to which the idealistic assertion refers. (p. 203n)

A more serious objection is the following. The complete equivalence of all observers moving at different (but uniform) velocities is valid only within the *abstract* spatio-temporal framework of the special theory of relativity. *De facto*, the existence of matter and the spatio-temporal curvature it produces destroy the equivalence of all observers by singling out some of them, i.e. those who follow, in their movement, the average movement of the matter. In all the cosmological solutions known up to 1949, the local times of these “privileged” observers could be composed into a unique global time. Thus, one might consider this global time as the *absolute* time. This absolute time flows objectively, and all the discrepancies between it and the observers’ relative times are to be traced to the effect of the motion relative to the average state of motion of the matter on measuring processes and on physical processes in general.^{2}

Gödel’s reply to this objection represents his original contribution to the debate. The reply is based on Gödel’s discovery of a new cosmological solution to the equations of the general theory of relativity, according to which it is impossible to define an absolute time in the way we have just seen. For, in the universes resulting from this cosmological solution, the local times of the “privileged” observers cannot be composed into a unique, global, absolute time. And this is not all. It is also impossible to define any other procedure yielding an absolute time reasonably similar to our intuitive notion of what such an absolute time should be.

What makes this operation impossible in such universes is the fact that

the compass of inertia in them everywhere rotates [in the same direction] relative to matter, which in our world would mean that it rotates relative to the totality of galactic systems. (p. 204n)

From this comes the name “rotating universes” by which we usually refer to them. If we impose on such universes the characteristic of being *static* and spatially homogeneous, and if we give the cosmological constant a value >0, then we obtain a universe in which time-lines are circular and closed. As a consequence, it follows that

by making a round trip on a rocket ship in a sufficiently wide curve, it is possible in these worlds to travel into any region of the past, present, and future, and back again, exactly as it is possible in other worlds to travel to distant parts of space. (p. 205)

Thus, in these universes, for any possible definition of a global absolute time, one could travel into regions of the universe belonging to the *past* according to that definition — and to the *future* of the observer. But, Gödel concludes,

if the experience of the lapse of time can exist without any objective lapse of time, no reason can be given why an objective lapse of time should be assumed at all. (p. 206)

We *subjectively* experience time flowing (and hence change within it), but to this subjective experience there does not (*cannot*, given general relativity) correspond any *objective* temporal order.

This reply refers to one *possible* cosmological solution. We are not sure that *this* solution really describes how our universe is. So, one might reply, it is true that in those rotating universes it is not possible to define an absolute time, but this does not imply that it is not possible to do so in *our* universe, since our universe could be a *non-rotating* universe. However, Gödel notices that

The mere compatibility with the laws of nature of worlds in which there is no distinguished absolute time, and [in which], therefore, no objective lapse of time can exist, throws some light on the meaning of time also in those worlds in which an absolute time *can* be defined. For, if someone asserts that this absolute time is lapsing, he accepts as a consequence that whether or not an objective lapse of time exists (i.e., whether or not a time in the ordinary sense of the word exists) depends on the particular way in which matter and its motion are arranged in the world. (p. 207)

In other words, if something like an absolute time existed (and if we admit that such a concept must preserve something of our original intuition of it), it should be valid in *all* possible universes. But if this is not the case, then a philosophical view should account for such an anomaly — and this does not seem easy to achieve.^{3}

There is a second objection to Gödel’s reply. We previously assumed that these rotating universes are *static*. Now, a static solution can hardly be considered a proper description of our universe, since in such universes it seems impossible to account for the so-called *red shift*. However, rotating solutions can be found for expanding (non-static) universes too, and in these solutions it *can* be impossible to define a notion of absolute time. But this calls for a clarification of what Gödel means by “absolute time”. This clarification is offered by Gödel in a footnote: in such universes, absolute time could fail to exist,

At least if it is required that successive experiences of one observer should never be simultaneous in the absolute time or (which is equivalent) that the absolute time should agree in direction with the times of all possible observers. Without this requirement an absolute time always exists in an expanding (and homogeneous) world. (p. 206n)

On the occasion of the second German edition (1955), Gödel clarifies further the point, by adding that

By an “absolute time” I understand a world time that can be defined without reference to particular objects and that satisfies the requirement formulated at the beginning of this footnote. More precisely, this should be called a “possible absolute time”, since several can exist within *one* world, even though that is only exceptionally the case in spatially homogeneous universes. (p. 206n)

Thus, summing up: relativity theory forces us to abandon the idea of time flowing equally for all observers. Different observers, different times. However, it still seems possible to accept a notion of absolute time as something that flows in the same direction for all observers. On the contrary, Gödel’s discovery of the rotating universes shows that in some universes (which *could* coincide with our own) it is at least possible to *halt* the flow of time (or even invert it). If we admit that the notion of absolute time cannot undergo such a twisting, then we must abandon that notion and accept the idealistic idea according to which time is just a product of our subjectivity.

The (theoretical) possibility, in Gödel’s rotating universes, of travelling through time stimulated several responses. Many authors focused on the theme, some criticizing Gödel for having lent credit to such an absurd idea, others believing that Gödel’s discovery could have interesting consequences for our theory of time. However, it must be noted that Gödel himself excluded, in the article we are considering, the possibility of time travel, on the basis of its *physical* impossibility. In a footnote he writes:

Basing the calculation on a mean density of matter equal to that observed in our world, and assuming one were able to transform matter completely into energy, the weight of the “fuel” of the rocket ship, in order to complete the voyage in t years (as measured by the travellers), would have to be of the order of magnitude of 10^{22}/t^{2} times the weight of the ship (if stopping, too, is effected by recoil). This estimate applies to t ≪ 10^{11}. Irrespective of the value of t, the velocity of the ship must be at least 1/√2 of the velocity of light. (p. 205n)
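
Taking the footnote's figures at face value, the hopelessness of the enterprise is easy to quantify; the helper below is my own back-of-envelope sketch, not Gödel's computation:

```python
def fuel_to_ship_ratio(t_years):
    """Gödel's estimate: the fuel would have to weigh about 10^22 / t^2
    times the ship, where t is the trip duration in traveller years."""
    return 1e22 / t_years**2

# Even allowing a full century for the round trip, the fuel would weigh
# about a billion billion (10^18) times as much as the ship.
assert fuel_to_ship_ratio(100) == 1e18
```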

Time travel would be contradictory (and hence would exclude the plausibility of such universes) only if it were *practically* feasible. But since for the moment it seems to be impossible (and what today seems to be just a physical impossibility could turn out to be a theoretical impossibility tomorrow), such an objection cannot exclude, *a priori*, that the space-time structure of our universe is actually of this kind.

Therefore, Gödel’s argument is not founded, in any sense, on the possibility of time travel, but only on the impossibility of defining, in such rotating universes, an absolute time; and this only depends on the existence, in these rotating universes, of closed time-lines.

Moreover, as correctly pointed out by Yourgrau,^{4} one cannot claim, at the same time, that time travel is (even if only theoretically) possible *and* that time is nothing but an illusion!

What must be admitted, of course — Yourgrau writes — is that Gödel believes he has shown the compatibility with the GTR of universes permitting time travel […]. But it is this very fact that Gödel takes to indicate that t, the standard variable for time, should not be read here as standing for genuine, successive *time*. But if there is no genuine time, there can be no genuine time travel. […] Gödel describes the R-universe as permitting time travel, but only if we do not read “time” as denoting a relativistic formal simulacrum of the real thing. (pp. 3-4)

[1] ⇑ The article can be found in Kurt Gödel, *Collected Works*, vol. 2: *Publications 1938-1974*, Oxford University Press, New York-Oxford, 1990, pp. 199-207. The shortness of the article doesn’t do justice to the complexity of its drafting: five different manuscripts were written before the final version. Two of these five manuscripts can be found in the third volume of the *Collected Works*.

[2] ⇑ It was precisely by reasoning along these lines that the physicist James Jeans concluded that there is no reason to abandon the intuitive idea of an absolute time which flows objectively.

[3] ⇑ One could exclude the rotating solutions just because they don’t permit us to define an absolute time, but this would be an arbitrary and *ad hoc* solution.

[4] ⇑ P. Yourgrau, *The Disappearance of Time. Kurt Gödel and the Idealistic Tradition in Philosophy*, Cambridge University Press, Cambridge 1991.

]]>