categories - Category Theory list
* Re: Dedekind versus Cauchy reals
@ 2009-02-13  4:39 Toby Bartels
  0 siblings, 0 replies; 12+ messages in thread
From: Toby Bartels @ 2009-02-13  4:39 UTC (permalink / raw)
  To: Categories list

Paul Taylor wrote in part:

>Toby Bartels said that the "UNDERLYING-SET AXIOM" in ASD is impredicative.
>This is correct, but I introduced this axiom as a "straw man".
>First, in order to say exactly what is needed to make ASD equivalent
>to (locally compact locales over) an elementary topos.

OK, that's pretty much what I thought.

>Second, to make the point that a great deal of mainstream mathematics
>is computable and does NOT require underlying sets.  I see the main
>thread of ASD as being about a computable theory.

I hope you won't mind if I take this opportunity to ask something
that I've been idly wondering about ASD lately,
and which is related to this thread (even the Cantor set).
That is: to what extent does ASD have inductive and coinductive objects;
in other words: to what extent do polynomial functors
have initial algebras and final (terminal) coalgebras?

Of course, N is put in by hand as the initial algebra of X |-> 1 + X.
And I know that you've constructed Cantor space as 2^N
(through a bit of argument since exponentials don't always exist)
and the proof that this is the final coalgebra of X |-> 2 x X goes through.
But the final coalgebra of X |-> N x X would be Baire space,
which is not locally compact, so we don't expect that to work.

My intuition is that polynomials with overt discrete coefficients
should have overt discrete initial algebras,
while those with compact Hausdorff coefficients
should have compact Hausdorff final coalgebras.
Have you any thoughts about that question?

>[Toby] has said, for example
>* [...]
>* Similarly, if you remove identity types from intensional type theory
>  (so breaking the justification of countable choice), you can still
>  justify the fullness axiom and so construct the Dedekind reals
>  (albeit not literally as sets of rational numbers).
>* [...]
>* But the Dedekind reals, as far as [he] can tell, do not; excluded
>  middle, power sets, fullness, or countable choice would each do the
>  job, but you need ~something~.

>I would challenge someone to consider the last of these questions
>seriously, maybe taking a hint from the second.

I'd be interested to hear if anybody has done ~any~ work on the second:
intensional type theory without identity types.
It seems to me to work if done in the style of Thierry Coquand
(with inductive types and dependent products as the main constructions)
but not if done in the style of Per Martin-Löf
(with W-types and dependent and finitary products and sums).

I should probably say something about what categories
would serve as syntactic models of such a type theory,
but I never fully worked that out.


--Toby




^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Dedekind versus Cauchy reals
@ 2009-02-23  9:03 Michael Fourman
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Fourman @ 2009-02-23  9:03 UTC (permalink / raw)
  To: Paul Taylor, categories


On 22 Feb 2009, at 14:09, Paul Taylor wrote:

> don't want to give all our ideas away before we have
> made good use of them ourselves

Others who adopted this approach, long ago, have since become
frustrated that they get no credit for ideas that (yet) others have
independently (re)invented.

Publish or perish!

Professor Michael Fourman FBCS CITP
Head of School
Informatics Forum
10 Crichton Street
Edinburgh
EH8 9AB
http://homepages.inf.ed.ac.uk/mfourman/
For diary appointments contact :
ajudd(at)ed-dot-ac-dot-uk
+44 131 650 2690






* Dedekind versus Cauchy reals
@ 2009-02-22 14:09 Paul Taylor
  0 siblings, 0 replies; 12+ messages in thread
From: Paul Taylor @ 2009-02-22 14:09 UTC (permalink / raw)
  To: Categories list

Thomas Streicher asked about the "operational semantics" of the ASD
calculus, and in particular computation with Dedekind cuts.  As he
says, the public documentation is rather limited at the moment, but
Andrej and I don't want to give all our ideas away before we have
made good use of them ourselves.

The first thing that one should say about "operational semantics"
is that mathematical formulae do NOT come with a prescribed way of
computing them.  A large part of what makes mathematics interesting
is that there may be many ways of looking at a problem, besides the
"obvious" one.  The two situations in which notation does come with a
preferred way of computing with it are long multiplication and lambda
calculus, but even these cases are open to INGENUITY in finding better
algorithms.

> I was aware of the fact that locatedness is part of the definition
> of a Dedekind cut. It guarantees that there are close enough
> approximations for any required degree of precision.

Yes. In fact, there are two notions of locatedness, and this is the
stronger one, which is necessary for arithmetic.   There are some
comments linking these ideas to the Archimedean axiom and the Conway
numbers in Sections 11 and 12 of "The Dedekind reals in ASD".

> when given a Dedekind cut as a mapping  c : Q -> 2_\bot and an
> accuracy eps then
>
>      \exists p,q : Q  c(p) = 0 /\ c(q) = 1 /\ q-p < eps
>
> is true and computing this truth value gives the witnesses p and q.

Yes, that's right, except that  "a mapping  c : Q -> 2_\bot" is not
a very helpful way of formulating it.   When computing a real number,
what we have (going back very clearly to Archimedes) is a pair of
logical formulae, each with a single free variable (p and q in your
notation, d and u in ours), one of which says when p is less than
the real number being computed, and the other when q is greater.
In order to fix the precision, we add another constraint that
squeezes them close together.
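[A minimal Python sketch of this formulation, added for illustration;
the names, the choice of sqrt(2), and the bisection search are mine,
not ASD syntax.  A real is given as a pair of predicates on the
rationals, and witnesses p, q are squeezed together to a precision eps.]

```python
from fractions import Fraction

def lower(p):          # holds when p is less than sqrt(2)
    return p < 0 or p * p < 2

def upper(q):          # holds when q is greater than sqrt(2)
    return q > 0 and q * q > 2

def witnesses(lower, upper, eps, lo=Fraction(0), hi=Fraction(2)):
    """Find rationals p, q with lower(p), upper(q) and q - p < eps,
    by bisecting a bracketing interval.  For this decidable example
    the test settles which cut each midpoint lies in."""
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if lower(mid):
            lo = mid
        else:          # mid is rational, so here mid is in the upper cut
            hi = mid
    return lo, hi

p, q = witnesses(lower, upper, Fraction(1, 1000))
```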

Then, as Thomas rightly says,
> one evaluates terms of type \Sigma of the ASD language in a way
> reminiscent of logic programming.

He claimed that this
> doesn't answer the question what is a computationally given real
> number.

Yes, it does, exactly.  Notice that the "real number" is only on
the "outside" of the computation,  and is only manifested when we want
to print it out --- hence my earlier pun that it is "peripheral".

The "guts" of the computation are with LOGICAL formulae.

> The impression I got is that he is using (kind of) interval
> arithmetic for determining truth values

Yes, but interval arithmetic is only one of a variety of tools.

Also, if you only think of intervals as a way of computing real numbers
to a given precision, you miss their real power.  Plainly, these
intervals just get wider and wider.

Really, one should think of interval analysis as USING CRUDE ARITHMETIC
TO DERIVE STRONG LOGICAL INFORMATION ABOUT FUNCTIONS.

For example,  if, when a function f is applied to an interval [d,u]
using Moore arithmetic, the result   f[d,u]   lies within an interval
(e,t),  which we may determine purely ARITHMETICALLY, then we know
the LOGICAL statement that   All x:[d,u]. e < f(x) < t.
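[A toy Python sketch of this point, added for illustration and not
taken from Paul and Andrej's code: evaluating a polynomial formula
with Moore interval arithmetic yields, purely arithmetically, a
universally quantified bound on the function.]

```python
class I:
    """A naive Moore interval [lo, hi]."""
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return I(min(ps), max(ps))

def f(x):                  # the FORMULA  f(x) = x*x - 2*x + 2
    return x * x - I(2, 2) * x + I(2, 2)

r = f(I(0, 1))             # crude arithmetic on [0,1] ...
# ... yields the LOGICAL statement  All x:[0,1]. r.lo <= f(x) <= r.hi
for k in range(11):
    x = k / 10
    assert r.lo <= x*x - 2*x + 2 <= r.hi
```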

Since  f  is given, not just as a function in the set-theoretic sense
of a collection of input--output pairs,  but as a FORMULA in a certain
language,  we may differentiate it formally, and apply interval
arithmetic to that, obtaining even stronger LOGICAL information about
the behaviour of the function.

Examining the language, the real work consists of manipulating
inequalities between polynomials over certain ranges of the variables.
Often we just want to know whether the inequality ALWAYS, SOMETIMES
or NEVER holds.   A simple way to answer this question may just be
Moore-wise evaluation.

Then there is the situation where the inequality is  f(x,y) > 0 in
a certain rectangle.  The polynomial defines a curve (=0) and two
regions (<0 and >0).  As we increase the precision, the curve
becomes closer to being a straight line, and the main numerical
computation is of the gradient of this line, and of a narrow
rectangle that encloses the curve.

Just as the well known Moore arithmetic provides a way of evaluating
universally quantified statements,  so Kaucher arithmetic with
back-to-front intervals answers existential statements.

I don't want to go into more detail than this, because we want to
get the credit for making this work (although we are keen to involve
more collaborators).   But also, these are just SOME of the things
that might be done,  AFTER ESCAPING FROM THE MENTAL STRAITJACKET of
computation with Cauchy sequences or intervals.

Paul Taylor










* Dedekind versus Cauchy reals
@ 2009-02-16 22:09 Paul Taylor
  0 siblings, 0 replies; 12+ messages in thread
From: Paul Taylor @ 2009-02-16 22:09 UTC (permalink / raw)
  To: Categories list

Thomas Streicher said, in response to Andrej Bauer,
> In case of Cauchy sequences I need the modulus of convergence to know
> when I am close enough. In case of Dedekind reals that's not needed
> PROVIDED the left and the right set are given by enumerations of
> rationals.

You still need a condition with the same objective.  It is given
by LOCATEDNESS:
    for any  d<u, either d is in the lower cut, or u is in the upper cut
This means that, if you so far know that the number is between d0 and
u0, ie that d0 is in the lower cut and u0 in the upper one, and want
another three decimal (ten binary) places, you need to bisect [d0,u0]
11 times.

> I "just" have to wait till close enough approximations from left
> and right have been enumerated.   But is it right that in your
> implementation the left and right cuts are given by ENUMERATIONS
> of their elements and not via semidecision procedures? I guess so.
> Well, but when considering Dedekind cuts this way it essentially
> boils down to introducing reals as interval nestings (as done in
> High School sometimes).

No -- you're trying to force Dedekind cuts (and real computation in
general) into the straitjacket of Cauchy sequences.   This is part
of the longstanding prejudice in favour of numbers and against logic.

If you GENUINELY take Dedekind cuts on board, you can start thinking
of computation using GEOMETRY and other disciplines.

The fragment of the ASD calculus for R has a syntax with
  - types  N, R, Sigma and Sigma^R
  - arithmetic operations   +  -  *
  - arithmetic relations    <  >  !=
  - the usual stuff on N too
  - logical operations   T   F  /\  \/
  - existential quantifiers  over  N, R, open intervals, closed intervals
    (with real endpoints),  ie with types  Sigma^N-->Sigma  etc
  - universal quantifiers over closed bounded intervals
  - definition by description for N
  - Dedekind cuts for R
  - primitive recursion over N at all types.

The primitive calculation is therefore a LOGICAL question,
  - existentially or universally quantified over some ranges
    of the variables
  - of an arithmetic comparison ( > OR < )
  - of two polynomials.

This suggests using SEMI-ALGEBRAIC GEOMETRY,  although I do not know
enough about this subject to see how to use it.

A quick way that answers a universally quantified question
surprisingly often is just to evaluate the polynomials using
interval (Moore) arithmetic.

Another technique is to split the range of one of the variables.

A much more powerful technique uses the INTERVAL Newton algorithm.
The "classical" Newton algorithm DOUBLES THE NUMBER OF BITS OF
PRECISION at each iteration -- once you have got below the "typical
length" of the problem.  Above this, it behaves chaotically.
The interval Newton algorithm "knows" how to behave -- like
interval halving when the problem is essentially combinatorial,
and like the classical algorithm when we're below the typical length.
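[A toy Python sketch of one interval Newton step, added for
illustration; it hard-codes f(x) = x^2 - 2, whose derivative extension
over [a,b] with a > 0 is [2a, 2b], and is not Andrej and Paul's
implementation.  Each step evaluates f at the midpoint, divides by the
derivative interval, and intersects with the current enclosure.]

```python
def inewton(a, b, steps):
    """Refine an enclosure [a, b] of the root of x^2 - 2."""
    for _ in range(steps):
        m = (a + b) / 2
        fm = m * m - 2
        # quotient interval  fm / [2a, 2b]  (0 is not inside [2a, 2b])
        q_lo, q_hi = sorted((fm / (2 * b), fm / (2 * a)))
        # Newton image  m - fm/F'([a,b]),  intersected with [a, b]
        a, b = max(a, m - q_hi), min(b, m - q_lo)
    return a, b

a, b = inewton(1.0, 2.0, 4)   # width shrinks roughly quadratically
```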

Whereas UNIVERSALLY quantified questions can often be answered
using "ordinary" (Moore) interval arithmetic,  EXISTENTIAL ones
are handled in a similar way, but using "back-to-front" intervals,
whose arithmetic was originally formulated by Edgar Kaucher.

Andrej's program uses forward and backward intervals to evaluate
Dedekind cuts, but does not at the moment use the interval Newton
algorithm.

Vaughan Pratt said, again in response to Andrej,
> If everything can be related to interval arithmetic in one way or
> another, why not take interval arithmetic itself as the gold standard
> for the constructive reals?  The Edalat-Escardo-Potter domain-theoretic
> analysis of interval arithmetic struck me as sufficiently canonical
> that
> I don't understand why all the alternatives aren't being evaluated
> relative to that one.  Are there alternatives that compensate for some
> shortcoming of interval arithmetic as understood through the lens of
> domain theory?

In previous ages, mathematicians of the day "standardised" on unit
fractions or Roman numerals.   At one conference that I attended,
a whole afternoon was given over to a team of interval analysts who
wanted to "standardise" their subject, in the bureaucratic sense of
getting some document accepted by the ISO.   One of the speakers did
some dreadful mathematics that depended heavily on decidable choices,
another presented some "software engineering" that was -- even for
that subject -- dogmatic and outdated, whilst the third gave a (good)
political speech.

If you read what most interval analysts write, it is ad hoc and
wrong-headed.   There is, in particular, a fundamental error in
regarding the interval [d,u] as the closed set of numbers BETWEEN
d and u.   Amongst other things, this makes it impossible to understand
Kaucher arithmetic (back-to-front intervals).   Another widespread
belief is that interval arithmetic is some wonderful new algebraic
system in the tradition of the complex numbers and quaternions.

A much simpler way of understanding the interval [d,u],  which is
both constructive and compatible with Kaucher arithmetic, is as
a DEDEKIND PSEUDO-CUT.  The two halves consist of the numbers that
are SO FAR KNOWN to be respectively less and greater than the number
being calculated.   This representation is much more natural, in
that advancing calculation and the Scott topology go in the same
sense as increasing the sets D and U, whereas the closed set [d,u]
becomes smaller.   Essentially, the two parts D and U then lead
their own separate lives, each belonging to an "extended line"
with the Scott topology.  It's actually very simple.
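[A toy Python sketch of this representation, added for illustration
and not taken from ASD: the two halves of the pseudo-cut record the
best bounds SO FAR KNOWN, each advancing independently and
monotonically; nothing in the representation forbids d > u, which is
exactly a Kaucher back-to-front interval.]

```python
class PseudoCut:
    """An interval as a Dedekind pseudo-cut: what is known so far."""
    def __init__(self):
        self.d, self.u = float('-inf'), float('inf')  # know nothing yet
    def learn_lower(self, p):     # "p is below the number"
        self.d = max(self.d, p)   # information only ever increases
    def learn_upper(self, q):     # "q is above the number"
        self.u = min(self.u, q)

x = PseudoCut()
x.learn_lower(1.4); x.learn_upper(1.5); x.learn_lower(1.41)
```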

Paul Taylor

PS.   I have flagged Toby Bartels' question,
> to what extent does ASD have inductive and coinductive objects,
to answer after the current thread about the reals, Cantor space,
the Conway numbers, constructivity and toposes has died down.


PPS.   My website has now gone live on a new server at www.PaulTaylor.EU
whilst the old one is still at www.monad.me.uk.   Please tell me
about missing links and server errors.   The "monad" address will
become a redirection in a couple of weeks' time, and will be
discontinued next year.






* Re: Dedekind versus Cauchy reals
@ 2009-02-16  7:01 Andrej Bauer
  0 siblings, 0 replies; 12+ messages in thread
From: Andrej Bauer @ 2009-02-16  7:01 UTC (permalink / raw)
  To: Vaughan Pratt, categories

On Sun, Feb 15, 2009 at 7:50 AM, Vaughan Pratt <pratt@cs.stanford.edu> wrote:
> If everything can be related to interval arithmetic in one way or
> another, why not take interval arithmetic itself as the gold standard
> for the constructive reals?  The Edalat-Escardo-Potter domain-theoretic
> analysis of interval arithmetic struck me as sufficiently canonical that
> I don't understand why all the alternatives aren't being evaluated
> relative to that one.  Are there alternatives that compensate for some
> shortcoming of interval arithmetic as understood through the lens of
> domain theory?

I essentially agree with you, namely that interval arithmetic
(actually, the interval domain) plays a fundamental role in the
construction of reals, as well as in practical computation (the
fastest implementations use interval arithmetic on top of
multiple-precision floating point).

A result of Vasco Brattka says that any two implementations of the
structure of reals (arithmetic, abs, semidecidable <, limit operator
for rapid sequences) are computably isomorphic. This can be
interpreted as saying that the domain-theoretic reals are as good as
any other version.

One reason why this is not the end of the story is that we do not
understand what happens at higher types. Vasco tells us that given two
versions of reals, R1 and R2, they are computably isomorphic. But how
about functions R1 -> R1, functionals (R1 -> R1) -> R1 and other
higher types?

For example, if R1 is the domain-theoretic reals (constructed as the
maximal elements of the interval domain) and R2 is the "Cauchy reals"
(represented as streams of digits, including negative digits), then we
know that the hierarchies

R1,  R1 -> R1,  (R1 -> R1) -> R1, ...

and

R2,  R2 -> R2,  (R2 -> R2) -> R2, ...

agree in the first three terms, but do not know what happens after
that. This was established by Alex Simpson, Martin Escardo and myself.
Dag Normann has also investigated the two hierarchies.

For the familiar case of natural numbers John Longley has shown  that
"in nature" there are only three versions of the hierachy

N,  N -> N,  (N -> N) -> N, ...

These are the hereditarily effective operators, the Kleene-Kreisel
continuous functionals, and a "mixed" version (the computable
Kleene-Kreisel continuous functionals). We would like to have a result
like that for reals, but we're stuck with the continuous version of
the hierarchy at rank 3.

In terms of implementation, the question is the following: does it
matter whether the reals are implemented transparently (the programmer
has access to their internal representation) or opaquely (reals are
values of an abstract data type whose internal workings cannot be
inspected)? We know that up to types of rank 2 it does not matter.
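[A toy Python contrast, added for illustration with invented names:
a transparent implementation exposes its representation (here, a
function producing shrinking intervals), while an opaque one exports
only an approximation method.]

```python
def sqrt2_intervals(n):              # transparent: clients see this
    """n bisection steps enclosing sqrt(2); width is 2^-n."""
    lo, hi = 1.0, 2.0
    for _ in range(n):
        m = (lo + hi) / 2
        lo, hi = (m, hi) if m * m < 2 else (lo, m)
    return lo, hi

class Real:                          # opaque: an abstract data type
    def __init__(self, approx):
        self._approx = approx        # hidden internal representation
    def within(self, eps):
        """Return an enclosure of width less than eps."""
        n = 1
        while True:
            lo, hi = self._approx(n)
            if hi - lo < eps:
                return lo, hi
            n += 1

r = Real(sqrt2_intervals)
lo, hi = r.within(1e-6)
```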

Until such questions are answered, settling with a single construction
or theory of real number computation is premature.

By the way, can you give a single interesting _total_ functional of
type ((R -> R) -> R) -> R? (Please don't ask me to define
"interesting".)

Andrej





* Re: Dedekind versus Cauchy reals
@ 2009-02-15  7:43 Ronnie Brown
  0 siblings, 0 replies; 12+ messages in thread
From: Ronnie Brown @ 2009-02-15  7:43 UTC (permalink / raw)
  To: Categories list

I have enjoyed, without entirely following,  the debate on this theme and
would like to  ask some naive questions.

Rather practical toposes are presheaf toposes. Do the Dedekind and Cauchy
reals differ in some of those?

To be more practical again, one use of the reals is to analyse motion, as
Lawvere has said. Motion includes change of data, which is analysed by
statistics. But suppose the data to be analysed has structure, such as a
directed graph. Then perhaps we need the reals in the topos of directed
graphs? I.e. insert the required structure at the very beginning of the
study, instead of post facto. Is the law of large numbers valid in the (??)
probability theory based on those reals? And which reals?

Idle curiosity I fear!

Ronnie






* Re: Dedekind versus Cauchy reals
@ 2009-02-15  6:50 Vaughan Pratt
  0 siblings, 0 replies; 12+ messages in thread
From: Vaughan Pratt @ 2009-02-15  6:50 UTC (permalink / raw)
  To: categories list

Andrej Bauer wrote:
> There is
> no need to think of a Dedekind real as a clumsy semidecision
> procedure. Instead, the left and the right cut may be seen as
> instructions for calculating better approximations from existing ones.
> This can be done quite efficiently with Newton's method (the variant
> for interval arithmetic). The computation that comes out then behaves
> very much like an iterative procedure for computing a Cauchy sequence.

If everything can be related to interval arithmetic in one way or
another, why not take interval arithmetic itself as the gold standard
for the constructive reals?  The Edalat-Escardo-Potter domain-theoretic
analysis of interval arithmetic struck me as sufficiently canonical that
I don't understand why all the alternatives aren't being evaluated
relative to that one.  Are there alternatives that compensate for some
shortcoming of interval arithmetic as understood through the lens of
domain theory?

Vaughan Pratt





* Re: Dedekind versus Cauchy reals
@ 2009-02-13 15:05 gcuri
  0 siblings, 0 replies; 12+ messages in thread
From: gcuri @ 2009-02-13 15:05 UTC (permalink / raw)
  To: categories

Quoting Paul Taylor <pt09@PaulTaylor.EU>:

>
>
> Toby has also said a lot of interesting things about many different
> systems.  These illustrate my point that it is essential to state
> which system, or which notion of "constructivity" you intend.
>
> He has said, for example
> * The Dedekind reals can also be constructed in CZF (Peter Aczel's
>    predicative constructive set theory) even if you remove countable
>    choice, using subset collection aka fullness.
> * Similarly, if you remove identity types from intensional type theory
>    (so breaking the justification of countable choice), you can still
>    justify the fullness axiom and so construct the Dedekind reals
>    (albeit not literally as sets of rational numbers).
> * The Cauchy reals exist in a Pi-W-pretopos (equivalently, in a
>    constructive set or type theory with exponentiation but no power
>    sets or even fullness and with infinity but no countable choice).
> * But the Dedekind reals, as far as [he] can tell, do not; excluded
>    middle, power sets, fullness, or countable choice would each do the
>    job, but you need ~something~.
>
> I would challenge someone to consider the last of these questions
> seriously, maybe taking a hint from the second.   ASD provides another,
> as it uses lambda terms over a special object instead of talking about
> "sets" of rational numbers.

A note concerning the possibility of constructing the Dedekind reals without
fullness, etc:
the Dedekind reals can be defined in constructive set theory  even without
excluded middle, power sets, fullness, or countable choice. The point is that
they then form a class and not a set.
Class-sized objects are however the norm in a constructive predicative setting.
For instance, a (non-trivial) locale is carried by a class that is never a set;
in general a formal space or formal topology is a large object too, so that one
often deals with superlarge "categories" with large homsets.


Giovanni Curi



Dipartimento di Matematica Pura ed Applicata
Universita' degli Studi di Padova
www.math.unipd.it





* Re: Dedekind versus Cauchy reals
@ 2009-02-13 12:28 Thomas Streicher
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Streicher @ 2009-02-13 12:28 UTC (permalink / raw)
  To: Andrej Bauer, categories

Dear Andrej,

I wasn't aware of the fact that insisting on a fixed rate of convergence can make
things slower. My argument BTW was not about speed but rather about the ability
to read off the desired result. In case of Cauchy sequences I need the modulus
of convergence to know when I am close enough. In case of Dedekind reals that's
not needed PROVIDED the left and the right set are given by enumerations of
rationals. I "just" have to wait till close enough approximations from left
and right have been enumerated.
But is it right that in your implementation the left and right cuts are given
by ENUMERATIONS of their elements and not via semidecision procedures?
I guess so.
Well, but when considering Dedekind cuts this way it essentially boils down
to introducing reals as interval nestings (as done in High School sometimes).
So really the question is whether Cauchy sequences or interval nestings are
better. The way I interpret your remarks is that interval nesting is better
since it allows one to avoid moduli of convergence, which is good since a fixed
rate of convergence is a source of inefficiency.

I think even constructivists rejecting impredicativity have no problem
embracing representations by interval nestings. If I am not mistaken one finds
this also in Martin-Loef's 1970 book.

Thomas






* Re: Dedekind versus Cauchy reals
@ 2009-02-13  0:07 Andrej Bauer
  0 siblings, 0 replies; 12+ messages in thread
From: Andrej Bauer @ 2009-02-13  0:07 UTC (permalink / raw)
  To: categories, Thomas Streicher

This is a reply to Thomas's post about "Dedekind vs. Cauchy". I am too
young to know why Cauchy reals were preferred by the early
constructivists, so I will reply only to remarks that regard
real-number computation, of which I have more experience.

Thomas gives pragmatic reasons for preferring Cauchy sequences,
actually the rapid ones, with rate of convergence 2^-n. Given a rapid
Cauchy sequence we can easily get an approximation to any desired
degree. On the other hand, Thomas says, a Dedekind real is implemented
as a semidecision procedure, which forces us to _guess_ approximations
to the real. And he says this must be done in parallel, which just
makes things worse.

Firstly, I think arguments which are based too heavily on efficiency
are on shaky ground. C is faster than Haskell, therefore everyone
should use C. Perhaps nobody has really tried to implement efficient
Dedekind reals, so how can we tell? In fact, as far as I know, nobody
has until last year. By the way, Cauchy sequences with a fixed rate of
convergence are a bad idea if you worry about efficiency. As Norbert
Mueller has demonstrated over the years, it is better to allow Cauchy
sequences (actually sequences of eventually shrinking intervals)
_without_ a priori conditions on the rate of convergence. Somewhat
paradoxically, by allowing arbitrarily slow rate of convergence we are
able to write _faster_ real arithmetic. This is a non-obvious fact
which the practitioners have been trying to get across, but it's not
sticking with the theorists for some reason. Thomas's argument about
AC_N being of only moderate help is true, but in a twisted way: in
practice AC_N is just never used explicitly, and moduli of continuity,
even fixed ones like 2^-n, are avoided at all cost.
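[A toy Python sketch of this point, added for illustration with
invented names: a real represented as a stream of eventually shrinking
intervals, with NO a-priori rate of convergence.  The producer refines
at whatever pace it likes; the consumer pulls until satisfied.]

```python
def sqrt2_stream():
    """Yield enclosing intervals for sqrt(2), promising nothing
    about how fast they shrink."""
    lo, hi = 1.0, 2.0
    while True:
        yield lo, hi
        m = (lo + hi) / 2
        lo, hi = (m, hi) if m * m < 2 else (lo, m)

def approximate(stream, eps):
    """The consumer decides what counts as close enough."""
    for lo, hi in stream:
        if hi - lo < eps:
            return lo, hi

lo, hi = approximate(sqrt2_stream(), 1e-6)
```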

Secondly, there are other criteria by which we should judge an
implementation. One of them is expressivity and the level of
abstraction. With the Dedekind reals certain concepts are more easily
expressed. For example, if one wants to talk constructively about
compactness of the closed interval, Cauchy reals will not be of much
help (unless you essentially declare [0,1] to be compact), whereas
locale-theoretic constructions in the style of Dedekind will work
better.

Lastly, since my computer believes in number choice, there really is
no dilemma. The Cauchy and Dedekind reals _must_ be equivalent to the
computer. Paul Taylor and I have devised algorithms for efficient,
sequential computation with Dedekind reals which use the equivalence
with Cauchy reals under the hood (at least I think that's what is
going on, but I do not fully understand the whole thing yet). There is
no need to think of a Dedekind real as a clumsy semidecision
procedure. Instead, the left and the right cut may be seen as
instructions for calculating better approximations from existing ones.
This can be done quite efficiently with Newton's method (the variant
for interval arithmetic). The computation that comes out then behaves
very much like an iterative procedure for computing a Cauchy sequence.

Best regards,

Andrej





* Dedekind versus Cauchy reals
@ 2009-02-12 12:54 Paul Taylor
  0 siblings, 0 replies; 12+ messages in thread
From: Paul Taylor @ 2009-02-12 12:54 UTC (permalink / raw)
  To: Categories list

I wanted to reply to what Toby Bartels and Thomas Streicher said
in response to my challenge to say why Cauchy sequences are often
preferred to Dedekind cuts as constructions of the real line.
However, I made the mistake of trying to read through the whole thread,
so I'm afraid this posting has ended up in a somewhat rambling state.
Also, I shall postpone the discussion of why DEDEKIND reals are
good for COMPUTATION to another posting.

NOTE TO ANYONE READING THIS IN AN ARCHIVE:

There have been numerous very interesting contributions to this thread
from Toby Bartels, Giovanni Curi, Peter Johnstone, Dusko Pavlovic,
Vaughan Pratt, Steve Stevenson and others, but many of them have been
posted under Subject: lines including the words
        Kantor dust
This is an excellent thread, though unfortunately on the wrong mailing
list.  But I know that Peter Schuster reads "categories", so maybe he
will chip in from the point of view of Bishop's constructive analysis.


Some quick answers to various people's questions first.

When I have spoken about "CONSTRUCTIONS" of the real line or of Cantor
space, I mean the word in the usual non-formal mathematical sense, not
according to any doctrine of constructivity.

Even in a particular system, I don't think it makes sense to say
whether the "construction" of a SPACE takes finite time.  We can ask
such questions (and, in more detail, about "complexity") for
computations of INTEGERS FROM INTEGERS.  Some people, in particular
some of those who take part in "Computability and Complexity in
Analysis", claim that this question is meaningful for REAL NUMBERS.
But for OBJECTS,  I don't see that it has any clear meaning.

Toby Bartels said that the "UNDERLYING-SET AXIOM" in ASD is
impredicative.
This is correct, but I introduced this axiom as a "straw man".
First, in order to say exactly what is needed to make ASD equivalent
to (locally compact locales over) an elementary topos.
Second, to make the point that a great deal of mainstream mathematics
is computable and does NOT require underlying sets.  I see the main
thread of ASD as being about a computable theory.



Toby has also said a lot of interesting things about many different
systems.  These illustrate my point that it is essential to state
which system, or which notion of "constructivity" you intend.

He has said, for example
* The Dedekind reals can also be constructed in CZF (Peter Aczel's
   predicative constructive set theory) even if you remove countable
   choice, using subset collection aka fullness.
* Similarly, if you remove identity types from intensional type theory
   (so breaking the justification of countable choice), you can still
   justify the fullness axiom and so construct the Dedekind reals
   (albeit not literally as sets of rational numbers).
* The Cauchy reals exist in a Pi-W-pretopos (equivalently, in a
   constructive set or type theory with exponentiation but no power
   sets or even fullness and with infinity but no countable choice).
* But the Dedekind reals, as far as [he] can tell, do not; excluded
   middle, power sets, fullness, or countable choice would each do the
   job, but you need ~something~.

I would challenge someone to consider the last of these questions
seriously, maybe taking a hint from the second.   ASD provides another,
as it uses lambda terms over a special object instead of talking about
"sets" of rational numbers.

Peter Johnstone, on the other hand, has referred to the fact that both
constructions exist in the topos-theoretic context, and are in general
different.   I don't want to put words into Peter's mouth, but a lot of
evidence from "ordinary mathematics" says that the Dedekind reals are
preferable.

Giovanni Curi has also described the situation in several settings,
apparently coming down on the side of Dedekind.   (I am citing other
people's postings primarily in order to encourage readers to read
through the thread.)



The reason why we need to be clear about the different systems is that
the mathematical results are DIFFERENT in them, in INTERESTING ways.
It is very easy to adopt the habit of taking a particular system as
given, and then asserting or denying that some theorem is
"constructive", as a judgement about the THEOREM.  However, the
theorems in question are typically headline results from mainstream
mathematics, so it is the SYSTEM and not the theorem that is under
scrutiny.

In particular, I would like to turn the tables on the predicativists.
A lot of excellent mathematics has been done in Martin-Lof type theory.
More recently, certain categorists have been inspired by this to study
"Pi-W-pretoposes" (which surely deserve a better name).
But these theories pick a few items from the (by Godel's theorem,
inexhaustible) collection of possible type-forming operations.

Two questions naturally arise:
* Why do you consider that the axioms that you have chosen are
   legitimate?
* Why stop with them?

For example, the recent debate amongst various constructive
disciplines has shown that it is valuable to set down explicit rules
about WHICH JOINS one may legitimately form in the lattice of open
subspaces of a space:

* In locale theory, one can form "all" of them, or, from a more
   constructive point of view, "enough" of them to obtain right adjoints
   such as Heyting implication and the direct image f_*.

* In ASD, the indexing objects of legitimate joins are called OVERT.
   You can take this as a tautologous definition, in which case the
   notion of "overtness" is variable and might, for example, be extended
   by introducing ORACLES.

* In Formal Topology, the ability to form unions is governed by
   Martin-Lof Type Theory,  wherein the "sets" (over which we may
   form unions) are closed under implication.  (At least, I believe
   that this is the case - no doubt someone will correct me if not.)
   From a computational point of view, this is a questionable thing to
   do (hence my first question).

* Algebraic Set Theory has an axiomatisation based on the properties
   of OPEN MAPS.  The latter are the same as families of overt spaces,
   in the same way that proper maps are families of compact spaces.
   Various workers on AST have augmented these axioms, explicitly
   with the purpose of including different LOGICAL connectives.
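For the record, the right adjoints mentioned in the first bullet are
characterised by the usual adjunction formulas (this is standard frame
theory, not specific to any one of the systems above):

```latex
% Heyting implication is right adjoint to meet with U:
U \wedge V \le W \iff V \le (U \Rightarrow W)
% and, for a continuous map f, the direct image f_* is
% right adjoint to the inverse image f^*:
f^{*}(U) \le V \iff U \le f_{*}(V)
```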

So why not consider the TYPE OF DEDEKIND REALS too?

This is a perfectly serious suggestion,  and indeed it is precisely
what Andrej Bauer and I have been doing.   Dedekind completeness is
a rule for INTRODUCING real numbers, given pairs of predicates,
whilst the Heine--Borel theorem introduces quantified predicates.
See
       www.PaulTaylor.EU/ASD/analysis.html
for a summary of this SYNTACTIC theory, along with the papers that
develop it.
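To make the pairs-of-predicates idea concrete, here is a minimal sketch
in Python (my illustration, not ASD itself: the predicate names and the
naive bisection loop are mine, and rationals stand in for the syntactic
theory).  A Dedekind cut is a pair of predicates on the rationals, and
for this particular cut both predicates happen to be decidable:

```python
from fractions import Fraction

# A Dedekind cut for x, presented as a pair of predicates on rationals:
# lower(q) holds iff q < x, and upper(q) holds iff x < q.
# Hypothetical illustration -- not the ASD formulation.

def sqrt2_lower(q: Fraction) -> bool:
    # q < sqrt(2)  iff  q < 0 or q^2 < 2
    return q < 0 or q * q < 2

def sqrt2_upper(q: Fraction) -> bool:
    # sqrt(2) < q  iff  q > 0 and q^2 > 2
    return q > 0 and q * q > 2

def approximate(lower, a: Fraction, b: Fraction, eps: Fraction) -> Fraction:
    """Locate the cut in [a, b] to within eps by bisection; this works
    here because lower(m) is decidable at every rational m."""
    while b - a > eps:
        m = (a + b) / 2
        if lower(m):
            a = m
        else:
            b = m
    return a

r = approximate(sqrt2_lower, Fraction(0), Fraction(2), Fraction(1, 1024))
# r is a rational lower bound for sqrt(2) with error less than 1/1024
```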

I intend to answer Thomas Streicher's "pragmatic" reasons for
computing with Cauchy sequences in a separate posting,  but I'd
like to pick out the "foundational" part here:

 > AC_N of course allows you to magically find a modulus of convergence
 > for every Cauchy sequence. But like Church's Thesis or Brouwer's
 > Theorem this is sort of a Deus ex machina (actually those are 2
 > contradictory dei ex machina!) and thus of moderate help for exact
 > computation on the reals.

(Do you see what I mean by "heresy"?)

The alleged conflict between Church's Thesis and Brouwer's Theorem
(I would prefer to say the Heine--Borel theorem) is in fact exactly the
point at which Andrej Bauer and I began our collaboration, in November
2004.  He asked me whether I believed both of these things, which I
said I did, and he progressively took me through the construction of
the Kleene Tree -- to the point of contradiction.

Of course, I wasn't going to accept that, so we back-tracked, and
in particular I set out what I meant by Church's thesis.  I think
this formulation is in Pino Rosolini's thesis,  and it is proved
for ASD in "Scott domains in ASD" (rejected by CTCS 2002).

There is a map p:N->Sigma^N that is "universal" in the (weak) sense
that, for any map f:N->Sigma^N, there is some (not necessarily unique)
fill-in  q:N->N  with  f=p.q:

                .>  N
              .     |
           q.       | p
          .         |
        .           V
       N ---------> Sigma^N
             f

I forget how the conversation with Andrej went on from that point,
but shortly afterwards he wrote down the Dedekind cut for the supremum
of a non-empty compact overt subspace of R. Our construction of R took
off from there.



This "conflict between Church's Thesis and Brouwer's Theorem"
illustrates another important principle about the methodology of
mathematics:

COUNTEREXAMPLES CLOSE MINDS

When someone demonstrates a counterexample, what they legitimately
prove is typically
           D, E, F  |-  not G.

Then they assert "not E", after which an entire discipline gets built
on the failure of E.   Cantor's theorem about the non-isomorphism of
a set with its powerset is perhaps the dominant example of this.

This is an "argument by authority", one of the principal fallacies of
logic.

For one thing, D, E, F and G play symmetrical roles in the argument,
so why pick on E as the erroneous assumption?

Eventually, some heretic comes along and points out that this proof
makes other assumptions A, B and C.  By substituting different ideas,
A', B' and C', they manage to show that these and D, E, F and G can
validly co-exist.  However, because of the pernicious influence of the
counterexample on other people's minds, it is very difficult for the
new ideas to get a hearing.


Finally, a footnote about CONWAY'S NUMBERS.

As Peter Johnstone has already pointed out, these include the ordinals.

However, the traditional theory of the ordinals depends VERY HEAVILY
on excluded middle.  The intuitionistic theory is therefore rather
complicated and, in particular, there are several different kinds of
intuitionistic ordinals, for which see
    "Intuitionistic Sets and Ordinals" by Paul Taylor, JSL 61 (1996).
    "Algebraic Set Theory" by Andre Joyal and Ieke Moerdijk, CUP 1995.
and my web page www.PaulTaylor.EU/ordinals/.

This means that the intuitionistic version of Conway numbers would be
even more complicated, although Frank Rosemeier made a start in
    "A Constructive Approach to Conway's Theory of Games" in
    "Reuniting the Antipodes:  Constructive and Nonstandard Views
    of the Continuum", Peter Schuster, Ulrich Berger and Horst Osswald
    (eds), Springer-Verlag, 2001.

There are further remarks on the interaction amongst Dedekind
completeness, the Archimedean property and Conway's numbers at the
end of section 12 of "The Dedekind reals in ASD".


Paul Taylor






* Dedekind versus Cauchy reals
@ 2009-02-09 11:49 Thomas Streicher
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Streicher @ 2009-02-09 11:49 UTC (permalink / raw)
  To: categories

Paul has asked why most constructivists have a preference for the Cauchy
reals. Well, most constructivists haven't, since under the widely accepted
AC_N the two coincide. This, of course, doesn't deny the fact that in many
sheaf toposes AC_N fails. But from the point of view of the BHK
interpretation, AC_N is most natural.

But I think the preference for Cauchy over Dedekind has a quite
pragmatic reason. What you are interested in is a Cauchy sequence of reals
with a fixed rate of convergence (ensuring e.g. that the n-th item has distance
< 2^{-n} to the limit). AC_N of course allows you to magically find a modulus
of convergence for every Cauchy sequence. But like Church's Thesis or Brouwer's
Theorem this is sort of a Deus ex machina (actually those are 2 contradictory
dei ex machina!) and thus of moderate help for exact computation on the reals.
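The point about a fixed rate of convergence can be made concrete.  A
hypothetical sketch (my illustration, not from this thread): represent a
real as a function taking n to a rational within 2^{-n} of it, so the
modulus is built in and arithmetic merely shifts the precision request:

```python
from fractions import Fraction
from math import factorial

# Hypothetical representation: a real X is a function x with
# |x(n) - X| <= 2^-n for all n, so the modulus is built in.

def e_approx(n: int) -> Fraction:
    """Rational within 2^-n of e, using the tail bound
    sum_{k>m} 1/k! < 1/(m! * m) for m >= 1."""
    m = 1
    while Fraction(1, factorial(m) * m) > Fraction(1, 2 ** n):
        m += 1
    return sum(Fraction(1, factorial(k)) for k in range(m + 1))

def add(x, y):
    # Addition shifts the precision request by one:
    # |x(n+1) + y(n+1) - (X + Y)| <= 2^-(n+1) + 2^-(n+1) = 2^-n.
    return lambda n: x(n + 1) + y(n + 1)
```

Without the modulus, no such precision-shifting arithmetic is possible,
which is exactly what AC_N is being invoked to repair.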

If you base your reals on Dedekind's idea you may certainly compute with
them, but the result isn't very telling, because a computable Dedekind
real is a semidecision procedure telling you whether a given rational
interval contains the (ideal) real under consideration. This doesn't help
you at all to compute the real up to a given precision, because you first
have to guess the right interval. Of course, you may consider a partition
of some area of guesses and apply the semidecision procedure to all those
intervals in parallel. But that's very inefficient and can't be
implemented within usual sequential programming languages.
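The guessing strategy described above can be sketched as follows (my
illustration: for the demonstration the semidecision procedure is
replaced by a decidable stand-in, which of course hides exactly the
difficulty being pointed out, since a genuine semidecider need not halt
on a "no" instance):

```python
from fractions import Fraction

def contains_sqrt2(a: Fraction, b: Fraction) -> bool:
    # Decidable stand-in for a semidecider: does (a, b) contain sqrt(2)?
    return (a < 0 or a * a < 2) and (b > 0 and b * b > 2)

def locate(contains, lo: Fraction, hi: Fraction, eps: Fraction):
    """Brute-force search over a partition of guesses: step through
    overlapping intervals of width 2*eps and test each one.  With a
    real semidecider these tests would have to run in parallel."""
    q = lo
    while q < hi:
        if contains(q, q + 2 * eps):
            return (q, q + 2 * eps)
        q += eps
    return None

interval = locate(contains_sqrt2, Fraction(0), Fraction(2), Fraction(1, 64))
# interval is a rational interval of width 1/32 containing sqrt(2)
```

The intervals overlap (step eps, width 2*eps) so that the target cannot
fall on a boundary between consecutive guesses.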

Thomas





Thread overview: 12+ messages
2009-02-13  4:39 Dedekind versus Cauchy reals Toby Bartels
  -- strict thread matches above, loose matches on Subject: below --
2009-02-23  9:03 Michael Fourman
2009-02-22 14:09 Paul Taylor
2009-02-16 22:09 Paul Taylor
2009-02-16  7:01 Andrej Bauer
2009-02-15  7:43 Ronnie Brown
2009-02-15  6:50 Vaughan Pratt
2009-02-13 15:05 gcuri
2009-02-13 12:28 Thomas Streicher
2009-02-13  0:07 Andrej Bauer
2009-02-12 12:54 Paul Taylor
2009-02-09 11:49 Thomas Streicher
