I have decided to create a new webpage to bring together my personal page and the present blog. For this reason, I have opened a new website: https://micheleginammi.wordpress.com/blog/. It contains my personal page, more info on my research, my CV, and the continuation of this blog.

As a consequence, *The Inductivist Turkey* won’t be updated any more. All the old posts have been moved to the new website, and all the new posts will be published directly there!

Hope to see you again on https://micheleginammi.wordpress.com/!

Best,

Michele


Here is a very interesting post on possible alternatives to Cantor’s theory of transfinite numbers! Was Cantor’s notion of the infinite really “unavoidable”?

The topic is particularly interesting when considered within the general framework of mathematical development. How does mathematics evolve? What “forces” mathematics to take some roads instead of others? And how do these choices influence the way in which we model and understand the natural world?

A prominent professor in the philosophy of mathematics once told me that the key to writing an attractive philosophy paper is to present the reader with a puzzle. “Give me a puzzle, and I’ll be interested”, he said. As I was surrounded by mathematicians and philosophers of mathematics who were constantly exchanging puzzles, I had no doubt that he was right: mathematicians and philosophers of mathematics like puzzles. But then, mightn’t it be the case that this fondness for puzzles influences much more than just our judgment of a philosophy paper (and our conversations over dinner)? Here’s a crazy idea (or maybe not so crazy): does our desire to be puzzled affect our judgement of a certain foundational mathematical theory?

The foundational mathematical theory which I have in mind is, of course, Cantor’s transfinite set theory. Given its general acceptance nowadays, it is easy to forget that in order to generalize arithmetic from the finite to…


Killing Pluto was fun, but this is head and shoulders above everything else. (M. Brown)

It was only yesterday that I published a post on standard and non-standard existential predictions. So I was very happy today when, opening the newspaper, I read that some astronomers have predicted the existence of a new planet in the Solar System! Michael E. Brown and Konstantin Batygin have just published a paper in the current issue of the *Astronomical Journal*, in which they argue in favour of the existence of a ninth planet, nearly the size of Neptune, orbiting the Sun every 15,000 years. The news is also reported by the magazine *Science*. The existence of this “Planet X” has not yet been confirmed by any observation, so for the moment the prediction remains a simple hypothesis.

Whether or not this prediction will be confirmed, it seems analogous to the prediction of Neptune. As with Neptune, the existence of this new planet has been postulated in order to “explain away” an anomaly. In this case, the anomaly concerns a clustering of six previously known objects that orbit beyond Neptune. What is anomalous here is the fact that these objects are grouped in a cluster and that their orbits are quite similar. As Brown and Batygin say, “the perihelion positions and orbital planes of [*these*] objects are tightly confined and […] such a clustering has only a probability of 0.007% to be due to chance, thus requiring a dynamical origin”. A ninth planet with the proper characteristics would explain away this “anomaly”.
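To get a feel for what a “probability of being due to chance” means here, consider a toy Monte Carlo estimate. This is my own illustration, not Brown and Batygin’s actual calculation, and the 100-degree window is an arbitrary choice: how often would six uniformly random perihelion directions all happen to fall within such a narrow arc?

```python
import random

def all_within_arc(angles, width=100.0):
    """True iff all angles (in degrees) fit inside some arc of the given width."""
    a = sorted(angles)
    n = len(a)
    # The points fit inside an arc of `width` degrees iff some circular gap
    # between consecutive points is at least 360 - width degrees.
    gaps = [(a[(i + 1) % n] - a[i]) % 360 for i in range(n)]
    return max(gaps) >= 360 - width

random.seed(0)
trials = 100_000
hits = sum(
    all_within_arc([random.uniform(0, 360) for _ in range(6)])
    for _ in range(trials)
)
print(hits / trials)  # roughly 0.01: such tight clustering is rare by chance
```

Even this crude model shows that a tight alignment of six independent orbits is something one would rarely expect from chance alone, which is why it cries out for a dynamical explanation.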

It must be noted, however, that the anomaly that this ninth planet would “explain away” is a bit different from the anomaly that Neptune explained away. In the Neptune case the anomaly was, so to say, much stronger than the clustering anomaly. There, the anomaly was something that the theory *could not account for*; here, the anomaly is something that the theory *could* account for, but only with a very low degree of probability. In other words, in the first case the anomaly *undermines* the theory, whereas in the second case the anomaly is still *compatible* with the theory, but the probability that the disposition of these objects orbiting beyond Neptune is simply due to chance is really low. We can then distinguish between two kinds of anomaly—let’s call them “incompatible” and “compatible” anomalies.

This story also has an ironic shade, since Brown is better known as a planet slayer than a planet discoverer! His 2005 discovery of Eris, a remote icy world nearly the same size as Pluto, revealed that what was previously seen as the outermost planet was just one of many worlds in the Kuiper belt. As a consequence of this discovery, astronomers promptly reclassified Pluto as a dwarf planet. The whole story is recounted by Brown in his 2010 book *How I Killed Pluto and Why It Had It Coming*. However, as Brown reportedly commented, “Killing Pluto was fun, but this [*the discovery of a new planet*] is head and shoulders above everything else”.

Typically, these predictions are made possible by an accurate mathematical representation of the physical system (or phenomenon) at issue. This raises questions like: How can we predict the existence of a new, *concrete* entity on the basis of mathematical, *abstract* considerations? How can mathematics, which can be defined as the study of *possible* structures, say something about elements in *real* structures? How can mathematics, which is developed by mathematicians for *completely different purposes*, turn out to be so effective in representing physical reality and fostering new discoveries?

A classical example of existential prediction is the prediction of the planet Neptune. During the first half of the nineteenth century, astronomers noticed several anomalies in the astronomical tables for the planet Uranus (Neptune’s neighbour). Briefly put, Uranus was not *exactly* where astronomers expected to find it. The deviation from the expected position was minimal, but enough to raise concerns. There were two possible explanations for this deviation: the first was that the theory (from which the predictions were derived) was wrong; the second was that not the theory, but the *initial conditions* were wrong. Now, the “theory”, in this case, is Newton’s theory of gravitation, and nobody at that time could seriously suppose that Newton’s theory of gravitation was wrong! Therefore, astronomers opted for the second possibility: that they had missed something in the initial conditions of the Solar System. What could they have missed? Well, an analysis of the Uranus tables suggested that probably there was an unknown body altering Uranus’ orbit. Thus, they started working on this, and they eventually made a prediction: that there is an eighth planet in the Solar System—Neptune, indeed. In order to account for the previous anomalies, this planet had to be such-and-such, with such-and-such mass, at such-and-such a position, with such-and-such a speed, and so on… Finally, in 1846, the new planet was actually observed, and the prediction was thus confirmed.

Now, which role did mathematics play in this prediction? An important role, of course, but not such a decisive one. Mathematics seems to have been used in this case only to predict the *characteristics* of the new planet: based on accurate calculations, astronomers established that, *in order to explain away the anomalies*, the planet had to have the predicted characteristics. However, mathematics did not really play any role in the prediction itself; the prediction was suggested by the anomalies. In other words, mathematics did not suggest the existence of anything; it only made it possible to *specify precisely the conditions under which the new entity could explain away the anomalies*.

This kind of existential prediction can easily be accounted for by the so-called deductive-nomological model (DN hereafter) proposed by Hempel (1965).^{2} According to this model, to *explain* a scientific fact is to derive this fact from a set of laws of nature *plus* some initial conditions. Analogously, we can also *predict* a scientific fact by applying the laws to the proper circumstances (described by the initial conditions). However, when the predictions are not confirmed by experience and we end up with *anomalies* (*i.e.*, when a physical system—previously thought to obey a certain set of laws—appears to behave as if the laws no longer apply to it), scientists will try to explain the reasons for this anomalous behaviour. In other words, they will try to “explain away” the anomalies. According to the DN model, this amounts to showing how this anomalous behaviour (assuming that the measurements revealing the anomalies are accurate enough) can be derived from a set of laws and initial conditions. Scientists have two options: either they change the laws, or they change the initial conditions. The first option is highly expensive, so they usually prefer to opt—at least as a first step—for the second one. This option involves a revision of the initial conditions, and this revision may consist in supplementing them, thus predicting a new entity. The criteria guiding this supplementation are defined by the DN model: the new initial conditions must be such that, by means of them, we are able to derive (and hence to explain) the behaviour we previously considered anomalous from our set of laws.
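Schematically (my own gloss, not Hempel’s notation), a DN explanation and the Neptune-style repair of an anomaly look like this:

```latex
% DN explanation: the explanandum E is deduced from laws plus initial conditions
\underbrace{L_1,\dots,L_k}_{\text{laws of nature}},\ \underbrace{C_1,\dots,C_m}_{\text{initial conditions}} \ \vdash\ E

% Anomaly: the observed behaviour E' cannot be so deduced.
% Standard repair: keep the laws, supplement the conditions with a new
% entity C_{m+1} (e.g. an eighth planet) so that the derivation goes through:
L_1,\dots,L_k,\ C_1,\dots,C_m,\ C_{m+1} \ \vdash\ E'
```

The whole work of the prediction is done by the choice of the supplementary condition: mathematics only fixes what that condition must look like for the derivation to succeed.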

We can call this kind of prediction “standard”.^{3} However, not all existential predictions are like the prediction of Neptune. Let’s take a look at a completely different kind of existential prediction—a kind that *cannot* be accounted for by means of the DN model.

An example of a non-standard prediction is the prediction of the so-called “omega minus” particle, independently predicted by Gell-Mann and Ne’eman in 1962 and then discovered in 1964. We can get a rough—but still accurate—picture of this prediction by reading the following passage from Ne’eman and Kirsh (1996):^{4}

In 1961 four baryons of spin 3/2 were known. These were the four resonances Δ⁻, Δ⁰, Δ⁺, Δ⁺⁺, which had been discovered by Fermi in 1952. It was clear that they could not be fitted into an octet, and the eightfold way predicted that they were part of a decuplet or of a family of 27 particles. A decuplet would form a triangle in the [strangeness-isospin] plane, while the 27 particles would be arranged in a large hexagon. (According to the formalism of SU(3), supermultiplets of 1, 8, 10 and 27 particles were allowed.) In the same year (1961) the three Σ* resonances were discovered, with strangeness −1 and probable spin 3/2, which could fit well either into the decuplet or into the 27-member family.

At a conference of particle physics held at CERN, Geneva, in 1962, two new resonances were reported, with strangeness −2 and electric charges −1 and 0 (today known as the Ξ*). They fitted well into the third course of both schemes (and could thus be predicted to have spin 3/2). On the other hand, Gerson and Shoulamit Goldhaber reported a ‘failure’: in collisions of K⁺ or K⁰ mesons with protons and neutrons, one did not find resonances. Such resonances would indeed be expected if the family had 27 members.

The creators of the eightfold way, who attended the conference, felt that this failure clearly pointed out that the solution lay in the decuplet. They saw the pyramid [see fig. above] being completed before their very eyes. Only the apex was missing, and with the aid of the model they had conceived, it was possible to describe exactly what the properties of the missing particle should be! Before the conclusion of the conference Gell-Mann went up to the blackboard and spelled out the anticipated characteristics of the missing particle, which he called ‘omega minus’ (because of its negative charge and because omega is the last letter of the Greek alphabet). He also advised the experimentalists to look for that particle in their accelerators. Yuval Ne’eman had spoken in a similar vein to the Goldhabers the previous evening and had presented them in a written form with an explanation of the theory and the prediction. (pp. 202-203)

In this case, it does not seem that the prediction was made in order to explain away an anomaly. What would the anomaly be in this case? The fact that there is an empty place in the decuplet scheme cannot be considered an anomaly, because this empty place does not undermine the natural laws at issue. Consider the following hypothetical case. Imagine that the prediction of the existence of a tenth spin-3/2 baryon turned out to be wrong. This failure could take two different forms:

- (A) we did not find any particle at all;
- (B) we *did* find a tenth particle, but this tenth particle had *completely different characteristics* from the ones predicted by the decuplet scheme.

In case (B), we would have a real anomaly, since the measurements could not be accounted for by our theory. In case (A), instead, the anomaly seems to consist simply in the fact that the symmetry scheme would turn out to have an empty place. But if this were the case, would it really be an anomaly? My answer is: no, it would not! Supposing that experimental physicists had not found any new particle corresponding to the characteristics pointed out, should we drop the SU(3) symmetry scheme? This seems unreasonable, for it could still be regarded as a valuable tool for *representing* the class of spin-3/2 baryons.

Hence, the fact that the formalism seems to commit us to the existence of an entity that does not exist cannot be regarded as wrong. There are many cases in which a formalism seems to commit us to entities that we do not regard as actually existing, and still we continue to use those formalisms without worrying about these “fictional” entities. Consider the case of the applicability of analytic functions to thermodynamics: we know we can treat the critical temperature of a ferromagnet as an analytic function of the number of its dimensions. But, since we cannot solve the problem for the 3-dimensional magnet, we solve it for a 4-dimensional magnet, then we expand the function as a power series in the complex plane around the number 4, and finally we plug in the value 3. Now, the problem is that in this procedure we may end up, at a certain point, with dimensions like 3.5, or even complex ones! The analytic function is used here just as a formal trick—and it works perfectly! Now, the point is: should we accept such weird dimensions for a magnet as physically real *just because they appear in the formalism*? Of course not! Should we abandon such a calculational tool just because it seems to commit us to weird dimensions? Again, no! We just accept them as a weird consequence of the mathematical trick we are exploiting.
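The formal trick can be illustrated with a toy example of my own (not the actual ferromagnet calculation): pretend that some quantity f(d) = 1/d is only accessible around d = 4, expand it in a power series around 4, and then plug in whatever “dimension” we like, physical or not.

```python
def f_continued(d, terms=80):
    """Power series of f(x) = 1/x around x = 4, evaluated at d:
    1/d = sum_k (-1)^k (d - 4)^k / 4^(k+1), convergent for |d - 4| < 4."""
    return sum((-1) ** k * (d - 4) ** k / 4 ** (k + 1) for k in range(terms))

# We 'know' f only around the 4-dimensional case, yet the series happily
# hands back values at d = 3 -- and at formal 'dimensions' like 3.5:
print(f_continued(3.0))   # 0.3333... = 1/3
print(f_continued(3.5))   # 0.2857... = 1/3.5
```

Nothing in the series cares whether d = 3.5 names a possible magnet; the intermediate values are artefacts of the expansion, not ontological commitments.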

Cases like these—and the hypothetical failure (A) for the omega minus prediction falls within this group—point out that there is an important distinction to be made here about the representative role of mathematics in physics. As a first approximation, we can say that a mathematical structure can play a representative role *without being fully representative*; or, in a slightly different terminology, we can say that a mathematical structure playing a representative role can be either ‘perfectly fitting’ or ‘redundant’ (*i.e.*, ‘not perfectly fitting’). In the first case, *every* element in the mathematical structure plays a representative role; in the second, this is not the case. Importantly, the fact that a mathematical structure is ‘redundant’ does not necessarily undermine its representative effectiveness.

I must say that not everybody would agree with this analysis. Bangu (2008), for example, thinks that in this case *there is* an anomaly to be explained away, and that the anomaly is precisely the empty place in the decuplet scheme. However, even if Bangu and I do not agree on this point, we both agree on the fact that this prediction cannot be accounted for in the same way as other “standard” existential predictions. The reason is that, even if you think that in this case *there is* an anomaly to be explained away and that the new particle was predicted in order to explain away this anomaly, still the way in which this prediction was made is very peculiar. Indeed, look at how Gell-Mann and Ne’eman predicted the characteristics of this new particle: they did not consider the interaction of the new entity with other particles in the scheme. They simply looked at the scheme and *extracted* the relevant information out of it! Whereas in the case of the planet Neptune the characteristics of the new entity were derived by considering the *interactions* of the alleged new entity, in this case the procedure is completely different!

What is even more interesting in this case is the fact that in this prediction mathematics seems to play a very peculiar role. Mathematics is used here to *represent* a certain class of particles, but this representation turns out to have a wonderful *heuristic* potential! Where does this heuristic potential come from? What is really surprising is that this heuristic potential seems to have been already enclosed in the representative effectiveness of the mathematical structure employed. Indeed, the prediction of this new physical entity seems to be motivated *only* by the mathematics employed. Just to be clear: this does not amount to saying that *no empirical fact* played a role in shaping the prediction. What I am stressing here is that the *justification* for the prediction seems to be purely mathematical—namely, purely based on the mathematical formalism employed.^{5}

This peculiarity of mathematics seems not to be limited to this case or to existential predictions only. In his famous (1998) book,^{6} Mark Steiner argues that the role of mathematics in contemporary physics is really unique. According to him, very often contemporary physicists draw important consequences about the physical world by relying on *purely formal* mathematical considerations, or “analogies”, which seem not to be in any sense rooted in the *content* of the mathematical representations. In this sense, the applicability of mathematics turns out to be “magic” or—as Wigner (1960) would have put it—“miraculous”. Steiner himself justifies the appropriateness of the word “magic” in this context:

Expecting the forms of our notation to mirror those of (even) the atomic world is like expecting the rules of chess to reflect those of the solar system. I shall argue, though, that some of the greatest discoveries of our century were made by studying the symmetries of notation. Expecting this to be any use is like expecting magic to work. (Steiner 1998, p. 72)

The philosophical problem I have sketched in this post can be summed up as follows: Where does this heuristic effectiveness of mathematics come from? How can a mathematical structure disclose such a heuristic potential? Under which conditions can a mathematical structure reveal its heuristic effectiveness? And finally: since not all mathematical structures seem to have such a heuristic effectiveness, how can we distinguish between heuristically fruitful mathematical representations and heuristically fruitless ones?

In my article *Avoiding Reification*^{1} I have analysed the prediction of the omega minus particle, addressed all these questions, and suggested an answer to them. The interested reader can take a look at this article, as well as at the articles and books quoted in this post.

[1] ⇑ Ginammi, Michele (2016), “Avoiding reification: Heuristic effectiveness of mathematics and the prediction of the omega minus particle”, *Studies in History and Philosophy of Modern Physics*, vol. 53, February, pp. 20-27.

[2] ⇑ Hempel, Carl G. (1965), *Aspects of Scientific Explanations*, Free Press, New York.

[3] ⇑ Another example of this kind of prediction is Pauli’s prediction of the neutrino. In this case, too, we have an anomaly; the new entity is postulated just in order to explain away the anomaly; and mathematics is used in order to derive the appropriate characteristics of this new entity (appropriate to deduce—together with the proper laws of nature—the behaviour that was previously puzzling).

[4] ⇑ Ne’eman, Yuval and Kirsh, Yoram (1996), *The Particle Hunters*, Cambridge University Press, Cambridge.

[5] ⇑ Other examples of this kind of prediction are Dirac’s prediction of the so-called positron, or Mendeleev’s prediction of new chemical elements on the basis of the periodic table. A more recent example is the prediction of the Higgs boson—the so-called “God particle” (I must admit, however, that I do not know much about this particular case, so I could be wrong on this point). All these cases share the fact that the prediction seems to be justified by purely mathematical considerations.

[6] ⇑ Steiner, Mark (1998), *The Applicability of Mathematics as a Philosophical Problem*, Harvard University Press, Cambridge (MA).

The article is already accessible online at this address. By clicking on this link until February 23, 2016, you will be taken to the final version of my paper on ScienceDirect **for free**! No sign-up or registration!

In this article I have discussed and critically examined a very interesting case of existential prediction in particle physics: the prediction of the Ω⁻ particle (a particle of the class of the spin-3/2 baryons). Existential predictions in science are always very thrilling, as you may imagine; but this prediction is even more interesting than usual *because of the peculiar role that mathematics seems to play*. Such a peculiar role raises a serious philosophical problem, since apparently we cannot justify it on the basis of standard methodological criteria. In this paper I discuss this problem and offer a solution to it by proposing a new logical reconstruction of the prediction of the Ω⁻ particle, based on the representative and heuristic effectiveness that mathematics may exhibit under certain conditions.

Here is the abstract of the paper, just to give you an idea of the content:

According to Steiner (1998), in contemporary physics new important discoveries are often obtained by means of strategies which rely on *purely formal* mathematical considerations. In such discoveries, mathematics seems to have a peculiar and controversial role, which apparently cannot be accounted for by means of standard methodological criteria. M. Gell-Mann and Y. Ne’eman’s prediction of the Ω⁻ particle is usually considered a typical example of the application of this kind of strategy. According to Bangu (2008), this prediction is apparently based on the employment of a highly controversial principle—what he calls the “reification principle”. Bangu himself takes this principle to be methodologically unjustifiable, but still indispensable to make the prediction logically sound. In the present paper I will offer a new reconstruction of the reasoning that led to this prediction. By means of this reconstruction, I will show that we do not need to postulate any “reificatory” role of mathematics in contemporary physics, and I will contextually clarify the representative and heuristic role of mathematics in science.

Good read and happy new year to everybody!


Here you can find the article. Good reading!

*This book* — as the back cover says — *brings together young researchers from a variety of fields within mathematics, philosophy and logic. It discusses questions that arise in their work, as well as themes and reactions that appear to be similar in different contexts. The book shows that a fairly intensive activity in the philosophy of mathematics is underway, due on the one hand to the disillusionment with respect to traditional answers, on the other to exciting new features of present day mathematics. The book explains how the problem of applicability once again plays a central role in the development of mathematics. It examines how new languages different from the logical ones (mostly figural) are recognized as valid and experimented with, and how unifying concepts (structure, category, set) are in competition for those who look at this form of unification. It further shows that traditional philosophies, such as constructivism, while still lively, are no longer only philosophies, but guidelines for research. Finally, the book demonstrates that the search for and validation of new axioms is analyzed with a blend of mathematical, historical, philosophical and psychological considerations.*

Let me express my gratitude to Gabriele Lolli, Marco Panza and Giorgio Venturi, whose initiative and perseverance made this work possible.

Something more imaginative can be found in Fritjof Capra’s *The Tao of Physics*, where he suggests the following analogy. Say you have an orange, and imagine that this orange has grown so much that it is now as big as the Earth. Its atoms would then be as big as cherries. Nevertheless, the *nucleus* of such a cherry-sized atom would still be invisible to our eyes. In order to see it, the atom would have to be as big as the dome of Saint Peter’s Basilica in Rome, and even then the nucleus would be no bigger than a grain of salt!
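Capra’s analogy holds up to a rough order-of-magnitude check. The sizes below are generic textbook values of my own choosing, not figures from Capra:

```python
# Generic textbook sizes (orders of magnitude), not figures from Capra:
orange_d = 0.10      # m, diameter of an orange
earth_d = 1.27e7     # m, diameter of the Earth
atom_d = 1.0e-10     # m, typical atomic diameter
nucleus_d = 1.0e-14  # m, typical nuclear diameter

scale = earth_d / orange_d   # blow the orange up to the size of the Earth

print(atom_d * scale)        # ~0.013 m: atoms become roughly cherry-sized
print(nucleus_d * scale)     # ~1.3e-6 m: the nucleus stays microscopic
```

Scaled atoms come out at about a centimetre, while the nucleus lands around a micron, still far below what the naked eye can resolve, just as the analogy says.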

Switching to protons, things become way more difficult! Maybe, as Bill Bryson suggests in his *A Short History of Nearly Everything*,

No matter how hard you try you will never be able to grasp just how tiny, how spatially unassuming, is a proton. It is just way too small.

But then, fortunately, he depicts the first of a long list of vivid representations, which go through the whole book:

Protons are so small that a little dib of ink like the dot on this ‘i’ can hold something in the region of 5,000,000,000,000 of them, or rather more than the number of seconds it takes to make half a million years.

These verbal representations are great, of course. They stimulate our curiosity, fire our imagination and convey a childlike sense of wonder which is probably one of the main sources of our knowledge. However, they cannot compete with this last, wonderful, pictorial representation of the scale of the universe. It’s simply amazing, you have to see it!

Enjoy the viewing!


The FilMat — of which I am proud to be a member — was created a few months ago by a group of Italian scholars in philosophy of mathematics, who originally met at the Scuola Normale Superiore in Pisa at the 2012 conference “Philosophy of Mathematics: from Logic to Practice”. It aims to foster the gathering of scholars working either in Italy or abroad on the philosophy of mathematics and strictly related fields, with special attention to those at early stages of their careers.

Here you can find the FilMat’s website. Here is the link to the call for abstracts for the 2014 international conference.

The argument is quite simple: if numbers were sets, we should be able to find a unique progression of sets with which numbers can be identified. But this is apparently impossible: there are a lot of ω-series that could serve the purpose equally well. For example, we can adopt von Neumann’s series, and say that 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, and so on, where the successor function is defined by s(x) = x ∪ {x}. Or we can adopt Zermelo’s series, and say that 0 = ∅, 1 = {∅}, 2 = {{∅}}, and so on, where the successor function is defined by s(x) = {x}. Now, the problem is: is 1 ∈ 3 or is 1 ∉ 3? Benacerraf then presents the example of two children, Ernie and John. The first learned that von Neumann’s ordinals are the natural numbers, while the latter learned that Zermelo’s ordinals are the natural numbers. Now, they will easily be able to learn arithmetic set-theoretically via the above constructions, and they will agree on any arithmetical theorem—except that for Ernie it is true that 1 ∈ 3, while for John it is false!

It can sound quite odd to ask whether a number *belongs* to another number or not. It is actually not an *arithmetical* question. But this is the point: Ernie and John agree on every *arithmetical* question; they disagree only on non-arithmetical issues, and these issues cannot be considered essential in order to point out the *metaphysical* status of numbers. We can do arithmetic either with Zermelo’s ordinals or with von Neumann’s ordinals; but if we are going to metaphysically identify numbers with sets, we must choose one of the two. We cannot admit both that 1 ∈ 3 and that 1 ∉ 3. But how can we choose? According to Benacerraf, we *cannot* choose, since there is no *arithmetical* reason to prefer one series over the other. Thus, since we cannot have more than one set of natural numbers, we must admit that *numbers are not sets*.
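The disagreement is easy to reproduce. Here is a small sketch of my own, modelling hereditarily finite sets as Python frozensets:

```python
def von_neumann(n):
    """n-th von Neumann ordinal: 0 = {} and s(x) = x ∪ {x}."""
    x = frozenset()
    for _ in range(n):
        x = x | frozenset([x])
    return x

def zermelo(n):
    """n-th Zermelo numeral: 0 = {} and s(x) = {x}."""
    x = frozenset()
    for _ in range(n):
        x = frozenset([x])
    return x

# Both encodings support the same arithmetic once decoded, but the
# non-arithmetical question "is 1 ∈ 3?" gets different answers:
print(von_neumann(1) in von_neumann(3))  # True  (Ernie's answer)
print(zermelo(1) in zermelo(3))          # False (John's answer)
```

Note also that the von Neumann ordinal n has exactly n elements, while every Zermelo numeral after 0 is a singleton; this cardinality difference is another non-arithmetical fact the two children would disagree on.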

But then what are numbers? Benacerraf’s solution is that we have to move from objects to structures. What permits Ernie and John to agree on arithmetical theorems is not the nature of any single number, but the fact that they are considering two different instantiations of the very same structure—so we can say that numbers are anything that has the right kind of structure.

Benacerraf’s article has been variously discussed over the years, but there have been very few attempts to directly challenge his argument. One of the most interesting of these attempts can be found in Eric Steinhart’s 2002 article “Why Numbers Are Sets”.^{2} In this article, Steinhart argues that we actually have reasons to prefer von Neumann’s ordinals over Zermelo’s ordinals (and over any other alternative solution), and hence we can overcome Benacerraf’s worries and say that yes, numbers really (metaphysically) are sets. Benacerraf poses two conditions that a set-theoretic structure ⟨N, s, 0, <⟩ must satisfy in order to be a good candidate for being the natural numbers:^{3}

**arithmetical condition (AC)**: ⟨N, s, 0⟩ satisfies the AC *iff* ⟨N, s, 0⟩ is a model of the Dedekind-Peano Axioms;

**cardinal condition (CC)**: ⟨N, <⟩ satisfies the CC *iff* it identifies the numerical ‘less than’ relation with a set-theoretic relation < such that the cardinality of a set S is n *iff* there is a 1-1 correspondence between S and {m ∈ N : m < n}.

Taken together, AC and CC constitute the “Natural Number condition” (NNC). According to Benacerraf, NNC suffices to define the natural numbers, so that any further condition we might want to add goes beyond what is necessary—and that is what makes the choice between Zermelo’s and von Neumann’s ordinals impossible. Steinhart, on the contrary, claims that actually

there is one set of sets that stands out very clearly for the mathematicians as the natural numbers. The mathematicians standardly identify the natural numbers with the finite von Neumann ordinals. They make the identification because not all apparent reductions satisfy the NN-conditions equally well. (p. 345)

Five reasons, according to Steinhart, justify this preference:

1. the set of von Neumann ordinals is recursively defined;
2. its sets uniquely satisfy certain ordering conditions;
3. it is uniformly extendible to the transfinite;
4. it is a minimal ω-series;
5. its n-th member is the set of all numbers less than n.

Conditions 1-5 actually specify five further conditions, beyond NNC, that N has to satisfy in order to be the natural numbers. However, one might still hold that, even if 1-5 justify the mathematicians’ preference for von Neumann ordinals, they still don’t compel us to admit that numbers have to be *metaphysically* identified with this set of ordinals. Indeed, 1-5 seem to be rather *stylistic* reasons, without any metaphysical relevance for our choice. Yet, what is interesting in Steinhart’s article is a mathematical proof he gives to convince us that the natural numbers actually are the von Neumann ordinals. The proof runs as follows.

CC implies the following axiom (C1) and definition (C2):

**C1**: for all n, if n is in N, then there exists a set |n| = {m ∈ N : m < n};

**C2**: the cardinality of any set S is n *iff* there exists some 1-1 correspondence between S and |n|.

Now, let’s suppose we choose a structure ⟨N, s, 0, <⟩ to serve as our natural numbers. Because of CC, we must admit that, for each N-number n, the set |n| of N-numbers less than n is in the NN-universe (i.e., it exists). Thus, if we assume that ⟨N, s, 0, <⟩ is the natural numbers, we must admit that our NN-universe contains the cardinality sets |0| = ∅, |1| = {0}, |2| = {0, 1}, and so on. Hence, the NNC entails that N* = {|n| : n ∈ N} is in the NN-universe. Similarly, we can define s* and 0*, and show that they are all in our NN-universe. So, we can prove that if ⟨N, s, 0, <⟩ satisfies the NNC, then ⟨N*, s*, 0*, <*⟩ is in the NN-universe and satisfies the NNC. Therefore, if ⟨N, s, 0, <⟩ is the natural numbers, then it satisfies the NNC; but if it satisfies the NNC, then ⟨N*, s*, 0*, <*⟩ satisfies the NNC as well, and hence we can say that the latter is the natural numbers too. It follows that N = N*. This means that n = |n|, and therefore that n = {m ∈ N : m < n} for any n in N. But if n = {m ∈ N : m < n} for any n in N, then 0 = ∅ and s(n) = n ∪ {n}, and these are precisely the von Neumann ordinals. Then, if ⟨N, s, 0, <⟩ is the natural numbers, N is the set of the finite von Neumann ordinals.
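The fixed point at the heart of the proof, n = |n|, can be checked concretely. In this sketch of mine (frozensets again standing in for hereditarily finite sets), the set of encoded numbers less than n coincides with n itself under the von Neumann encoding, but not under Zermelo’s:

```python
def von_neumann(n):
    """n-th von Neumann ordinal: 0 = {} and s(x) = x ∪ {x}."""
    x = frozenset()
    for _ in range(n):
        x = x | frozenset([x])
    return x

def zermelo(n):
    """n-th Zermelo numeral: 0 = {} and s(x) = {x}."""
    x = frozenset()
    for _ in range(n):
        x = frozenset([x])
    return x

def card_set(encode, n):
    """|n|: the set of encoded numbers less than n (as in C1)."""
    return frozenset(encode(m) for m in range(n))

# The von Neumann ordinals satisfy n = |n| ...
for n in range(6):
    assert card_set(von_neumann, n) == von_neumann(n)

# ... while for Zermelo's numerals |n| is a different set altogether:
print(card_set(zermelo, 3) == zermelo(3))  # False
```

This is exactly why, once |n| is granted a place in the NN-universe, the von Neumann series looks privileged: it is the one series that is identical with its own cardinality sets.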

The argument is very interesting, but I think it is not valid. I am not completely sure about it, so I will present my objection very cautiously. The key point in Steinhart’s argument is the fact that C1 imposes the existence, for any n in N, of the set |n| = {m ∈ N : m < n}. According to Steinhart,

If the NN-conditions assert some rule R, then the NN-universe contains the domain of R, the range of R, the extension of R, and nothing else. For if we cannot reason to the existence of these objects in the NN-universe, then that rule is meaningless (it plays no role in determining the models of the NN-conditions). (p. 353)

This is what warrants Steinhart in saying that N* is in the NN-universe, and that’s precisely what I can’t understand. It seems to me that there is no need to say that the NN-universe has to contain all this stuff. Of course, the domain, the range and the extension of R must be contained in the universe *of our set theory* (the set theory in which we carry out the reduction), but there’s no need to say that they have to be contained in the NN-universe. Now, if I am right in noticing this, then we are no longer entitled to say that N* is in the NN-universe. It is in the universe of our set theory, and, since it satisfies NNC, this means that we have a different way to identify numbers with sets; but we can no longer conclude that N = N*, and hence the proof is no longer valid. Simply, CC implies the existence *in our set-theory universe* of |n|, which is needed in order to say that the cardinality of a set S is n *iff* S can be put in a 1-1 correspondence with the set |n|. It remains a matter of style whether we want to adopt this or that progression of sets, but nothing compels us to say that if we can identify numbers with sets, then we cannot identify numbers with anything but the finite von Neumann ordinals.

A further consideration can be made, as a conclusion of this post. If I am right in rejecting Steinhart’s proof (and, I restate, I am not sure I am), then the only way to reject Benacerraf’s conclusion is to hold a strong version of mathematical naturalism: numbers are the finite von Neumann ordinals because *that’s what working mathematicians use*. And, vice versa, if we agree with Benacerraf, we cannot commit ourselves to too strong a version of mathematical naturalism.

[1] ⇑ P. Benacerraf (1965), “What Numbers Could Not Be”, *The Philosophical Review*, vol. 74(1), pp. 47-73. Reprinted in P. Benacerraf and H. Putnam (eds) (1984), *Philosophy of Mathematics*, Cambridge University Press, New York, pp. 272-295.

[2] ⇑ E. Steinhart (2002), “Why Numbers Are Sets”, *Synthese*, vol. 133(3), pp. 343-361.

[3] ⇑ In ⟨N, s, 0, <⟩, N is a set of sets, s is a one-to-one function from N to N, 0 is a particular set belonging to N, and < is a set-theoretic relation. The idea is that if we identify ⟨N, s, 0, <⟩ with the natural numbers, then N is the set of natural numbers, s is the successor function, 0 is the initial number and < is the ‘less than’ relation.