Just a comment on this:

This is one of the reasons why I have some difficulty trusting proofs of properties of the natural numbers formalized in CIC: are people "proving" properties of the natural numbers that are already assumed in order to define CIC?

This is a general foundational problem, which exists even for first-order logic. It is impossible to avoid circularity entirely in foundations.

There are models of ZFC which have a non-standard version of the natural numbers (easy to construct via the compactness theorem, if we really believe ZFC is consistent). Let's pick such a model V*, with its non-standard version of the natural numbers N*. Then inside V*, the statement "N* embeds into every model of PA, and every model of PA which embeds into N* is isomorphic to N*" evaluates to true. From outside V*, however, we see that the standard natural numbers N embed into N*, but are not isomorphic to N*.
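For concreteness, here is the standard compactness construction of such a non-standard model (this sketch is mine, not part of the original comment; it assumes PA, or ZFC, is consistent):

```latex
% Extend the language of PA with a fresh constant c, and let
%   T = PA \cup { c > n-bar : n a standard natural number },
% where n-bar abbreviates the numeral 1 + 1 + ... + 1 (n times):
T \;=\; \mathrm{PA} \;\cup\; \bigl\{\, c > \underbrace{1+1+\cdots+1}_{n}\; :\; n \in \mathbb{N} \,\bigr\}
% Every finite subset of T is satisfiable in the standard model N
% (just interpret c as a large enough number), so by compactness T
% has a model M. The interpretation of c in M sits above every
% standard numeral, so M is a non-standard model of PA.
```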

What this means is that the statement "I am (isomorphic to) the standard natural numbers" is not absolute over all models of set theory.

But the natural numbers are used to define such primitive notions as "string", "formula", and "language". This means that, depending on what ambient model of set theory we are working in, the "set of first-order formulas in the language of (0,1,+,x)" is not absolute. Different ambient universes will disagree over whether something counts as a formula. From an outside perspective, V* has formulas which look to us to be infinite in length. But V* sees them as having finite length, generated by only finitely many applications of logical operations.

For example, there are well-formed terms in V* which look like 1+1+1+... ...+1+1+1, where infinitely many 1's appear. This means that if you were to look at the "models of PA" in V*, all of them would be non-standard from our outside perspective, because they would all have to interpret such terms. So definitions like "the initial model of PA" won't work like we expect in V*.
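The point can be stated precisely (my gloss, not in the original comment): "term" is defined by induction on length, and V* measures lengths using N*, so a term of non-standard length still counts as finite inside V*:

```latex
% Inside V*, for any n in N* (possibly non-standard), the object
t_n \;=\; \underbrace{1 + 1 + \cdots + 1}_{n\ \text{ones}}
% is a well-formed term: V* believes it was built by "finitely many"
% applications of +, since n is finite *according to V*'s own N*.
% From outside, when n is non-standard, t_n has infinitely many 1's.
```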

This means that if we hoped to somehow use first-order logic to pin down the natural numbers, we find that we cannot even pin down the language of arithmetic. We cannot define the language of arithmetic without first defining something like the natural numbers, and we cannot define the natural numbers without defining something like the language of arithmetic.

How do set theorists fix this in practice? Well, they restrict attention to what are known as "transitive" models of set theory. To do this, they essentially need to fix a "standard" model of set theory, V, and then only look at models of set theory which, roughly, agree with V about the element-of relation. But if we were skeptical about fixing a standard model of the natural numbers, we surely should be skeptical about fixing a standard model of set theory, as this is a much stronger assumption.

Another fix is to move to second-order logic. But this is basically the same as fixing a standard model V of set theory, which again is much stronger (and thus philosophically less justified) than just fixing a standard model of PA.

Whenever you use inductive types, you are of course assuming in the background that you have a standard model N of arithmetic. Because if you don't assume this, then it is ambiguous what the initial model should be, or even what counts as a model at all.
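To see where that background assumption enters, here is a minimal sketch in Lean 4 (my illustration; `Nat'` is a hypothetical name to avoid clashing with the built-in `Nat`):

```lean
-- The inductive definition of the naturals, as in CIC-based provers.
-- "Initiality" is packaged in the generated recursor: any other type
-- equipped with a zero and a successor receives a map from Nat'.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- The kernel-generated eliminator. Its intended meaning presupposes
-- that every element of Nat' arises from *finitely many* applications
-- of succ -- and "finitely many" is exactly the notion the ambient
-- metatheory (its own standard N) must supply.
#check @Nat'.rec
```

Interpreted inside a universe like V*, the "same" inductive definition would pick out V*'s version of the naturals, which from outside is non-standard.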

One of the lessons from Gödel should be that there is no way of removing this ambiguity which does not itself introduce more ambiguity.

James Moody