categories - Category Theory list
* Re: coinduction
@ 2000-10-27 20:40 Jesse F. Hughes
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
  2000-10-30  3:38 ` coinduction Dusko Pavlovic
  0 siblings, 2 replies; 14+ messages in thread
From: Jesse F. Hughes @ 2000-10-27 20:40 UTC (permalink / raw)
  To: Categories@Mta. Ca


I'm afraid that I've never quite understood the notion of guarded
equations, so I cannot comment on much of what you wrote below without
some more time and more thought.  Let me make a few introductory
comments and perhaps I can say more later.

Dusko Pavlovic <dusko@kestrel.edu> writes:

> Al Vilcius asks:
> 
> > > can anyone explain coinduction?
> > > in what sense is it dual to induction?
> 
> and most people reply along the lines of:
> 
> > The principle of induction says that initial algebras have no
> > non-trivial subalgebras.
> 
> > Coinduction says, dually, that the final coalgebra has no
> > non-trivial quotients.
> 
> but wouldn't it be nice to keep the standard definition of
> induction, which says that the equations
> 
>     f(0) = c
>     f(n+1) = g(f(n))
> 
> determine a unique f, given c and g? this is the definition that
> most first year students study, and many programmers use every
> day, usually without the slightest idea about subalgebras, or
> triviality. do we really gain anything by replacing this simple
> scheme by the higher-order statement about all subalgebras? ok,
> we do get the appearance of symmetry between induction and
> coinduction. but is the higher-order statement about quotients
> really the friendliest form of coinduction, suitable to be taken
> as the definition?
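The first-year scheme quoted above can be run directly. A minimal Python sketch (names mine, purely illustrative): c and g determine f uniquely, with no mention of subalgebras.

```python
def rec(c, g):
    """The unique f with f(0) = c and f(n+1) = g(f(n))."""
    def f(n):
        acc = c
        for _ in range(n):
            acc = g(acc)
        return acc
    return f

# c = 1 and g = doubling determine f(n) = 2^n:
double = rec(1, lambda x: 2 * x)
assert double(0) == 1
assert double(6) == 64
```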

I have always considered the statement regarding existence of a unique
function f to be the principle of recursion.  The principle of
induction is, in my mind, the weaker principle that says that if a
property holds of 0 and is closed under successor, then it holds of N.
I identify "properties" with subobjects here.  In terms of
homomorphisms, this is equivalent to the "uniqueness part" of
initiality (assuming that our category has some epi-mono factorization
property, I suppose).

One may question whether it's appropriate to take the principle of
induction to apply to all properties or only those properties which
are definable in some sense -- I guess you've done as much when you
refer to the "higher-order statement" above.  I'm not sure to what
extent my use of all subalgebras differs from your quantification over
all g and c in the statement of recursion above, however.

When we separate the properties of induction and recursion in the way
I mention above, by the way, then (conceptually) induction is a
property of those algebras A in which each element is picked out by a
(not necessarily unique) term of our language.  I.e., those A for
which there is an epi from the initial algebra to A.  Hence, induction
in my sense is weaker than recursion (induction in your sense).

Regarding whether a statement about quotients is really the
"friendliest form" of coinduction, I'll have to agree that it is not,
if one's aim is to describe coinduction as it is commonly used.  I
chose to describe it in the terms above because these terms make clear
the "co" in "coinduction".

I did gloss over details that ensure the equivalence of the "familiar
principle" of coinduction and the principle regarding quotients.  If
the base category has kernel pairs of coequalizers and G preserves
weak pullbacks, then a G-coalgebra A has no non-trivial quotients iff the
equality relation is the largest coalgebraic relation on A (I hope
I've got that right).  Inasmuch as the principle I called coinduction
is not always equivalent to the familiar principle, I was being a bit
misleading.  However, I think it is the right approach if one's goal
is to explain the relationship between induction and coinduction.

I'm not sure that the principle regarding the equality relation is
what you'd call coinduction, however.  I think that the principle you
have in mind is what I'd call "corecursion".  Analogous to the
distinction between recursion and induction, I omit the existence part
of finality in what I call "coinduction" and I use the uniqueness
part.
 
> 
> and then, moving from coalgebraic theories to comonads, like
> from algebraic theories to monads, should presumably yield an
> HSP-like characterization for the categories of (comonad)
> coalgebras. but coalgebras for a comonad over a topos usually
> form just another topos.
> 

But, there *are* such characterizations of categories of coalgebras,
if I understand your point.  Namely, a full subcategory of coalgebras
is closed under coproducts, codomains of epis and domains of regular
monos iff it is the class of coalgebras satisfying a "coequation" --
given, here, some boundedness conditions on the functor so that things
work out well.  For brevity's sake, and because you're likely familiar
with this result, I'll omit details on what a coequation is and how
it's satisfied.

By the way, about that last statement regarding coalgebras "usually"
forming a topos:  Doesn't one need that the endofunctor preserve
pullbacks, or am I missing something?

-- 
Jesse Hughes
"The only "intuitive" interface is the nipple.  After that, it's all
learned."                             -Bruce Ediger on X interfaces
"Nipples are intuitive, my ass!"      -Quincy Prescott Hughes



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: coinduction: definable equationally?
  2000-10-27 20:40 coinduction Jesse F. Hughes
@ 2000-10-28  9:23 ` Vaughan Pratt
  2000-10-28 20:54   ` Jesse F. Hughes
  2000-11-02 17:09   ` Luigi Santocanale
  2000-10-30  3:38 ` coinduction Dusko Pavlovic
  1 sibling, 2 replies; 14+ messages in thread
From: Vaughan Pratt @ 2000-10-28  9:23 UTC (permalink / raw)
  To: Categories@Mta. Ca


1.  Is there a variety embedding coinduction?

Jesse Hughes distinguishes induction from recursion in terms of natural
numbers.  For him induction is Peano's mathematical induction, while
recursion is the uniqueness of the map from the (unique up to a unique
isomorphism) initial algebra to an arbitrary algebra.

To me induction is on the one hand a slightly more general notion, yet on
the other sufficiently tractable that it can be axiomatized equationally
when suitably expressed, as I'll now describe.  In particular there
exists a variety embedding induction, meaning one in which an induction
principle holds.  Whether the additional generality makes induction
equal to recursion is another story.

The context in which all this arose, namely answers to Al Vilcius'
question about coinduction, naturally prompts the question: is there a
variety embedding coinduction?  I'll return to this at the end.

2.  Nature of Induction

Suppose, somewhere out there in the universe, a world exists where
kicking a nonworking TV set always leaves it in its nonworking state.
Then in that world (certainly not ours), it is surely the case that
repeated kicking can do no better.

The general principle here is that if performing a certain action on
a system cannot under any circumstance destroy a certain property of
that system, then neither can _repeatedly_ performing that action on
that system.

I submit that this is the essence of induction, of which mathematical
induction is just an instance.

This principle may be expressed more logically as

        PQ  |- P
        -------            (IND)
        PQ* |- P

The premise is that Q preserves P.  That is, if first P and then
afterwards Q, it follows that P.

The conclusion is that Q* preserves P.  That is, if first P and then
afterwards Q repeatedly, it follows that P.  Here we understand Q*
as the reflexive transitive closure of Q, in the spirit of Kleene's
regular expressions, which we take to denote the performance of Q zero
or more times.

By itself (IND) serves only to bound Q* from above, which would allow
us to satisfy (IND) with Q* = 0 for all Q.  We may bound Q* from below
by adding that Q* is a closure of Q, namely Q <= Q*, and is transitive
--- Q*Q* <= Q* --- and reflexive --- 1 <= Q*.  These additional
conditions make Q* the least reflexive transitive element greater or
equal to Q (proved below), thereby uniquely determining it.
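As a sanity check, (IND) together with these bounds on Q* can be exercised in a concrete model of the semiring: binary relations on a finite set, with union as +, composition as ., and the identity relation as 1. The Python sketch below is mine and purely illustrative; star computes the least reflexive transitive relation containing Q by fixpoint iteration.

```python
CARRIER = range(3)
ID = frozenset((x, x) for x in CARRIER)

def comp(p, q):
    # Relational composition, interpreting the semiring's "."
    return frozenset((x, z) for (x, y) in p for (y2, z) in q if y == y2)

def star(q):
    # Least reflexive transitive relation containing q, by fixpoint
    # iteration (terminates because the carrier is finite).
    s = ID | q
    while True:
        t = s | comp(s, s)
        if t == s:
            return s
        s = t

# One instance of (IND): if Q preserves P (PQ <= P), then so does Q*.
P = frozenset({(0, 1), (0, 2)})
Q = frozenset({(1, 2), (2, 1)})
assert comp(P, Q) <= P          # premise:    PQ  <= P
assert comp(P, star(Q)) <= P    # conclusion: PQ* <= P
```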

3.  Equational Expression of Induction

The earliest description to my knowledge of a finitely based variety
embedding an induction principle is Tarski and Ng's RAT [1] (1977).
(The second is DALG, dynamic algebras, which I described two years later
[2] (1979), independently of [1], which I did not learn about until 1989.)
The following, based on [3] (1990), generalizes the Boolean setting for
RAT to residuated semirings.  Others who have worked on finitely based
varieties embedding induction include Kozen, Bloom, and Esik, to name
just a few.

Take as the ambient variety that of semirings with unit.  The signature
is 2-2-0-0, notated (X, ., +, 0, 1).  The equations are:

1.  (X,+,0) is a semilattice with unit: + is associative (1), commutative
(2), and idempotent (3), with identity 0 (4).

2.  (X,.,1) is a monoid:  . is associative (5), with identity 1 (6,7).

3.  Distributivity:
        x.(y+z) = x.y + x.z              (8)
        (x+y).z = x.z + y.z              (9)

(Distributivity could be weakened to monotonicity for our purposes,
but since (RR) and (LR) below force it anyway we may as well put it in
here to justify calling this a semiring.)

We make the following abbreviations.

        xy for x.y
        x <= y for x+y = y (inequations are expressible equationally)
        x |- y for x <= y (i.e. for x+y = y), to look more logical

The induction principle IND may then be understood as being about such
a semiring expanded with a unary operation *.

Thus far the signature has constants 0 and 1, binary operations + and
., and a unary operation *, amounting to Kleene's language of regular
expressions.  The addition of two further binary operations, suitably
axiomatized equationally, permits a purely equational expression of
the foregoing.

The desired language expansion adds right and left residuation, x\y and
x/y (of which just the first suffices for our immediate purposes).

              PQ <= R iff Q <= P\R (right residuation)     (RR)
                      iff P <= R/Q (left residuation)      (LR)

These axioms, recognizable in this forum of course as making P\ and
/P the polarities of a Galois connection or posetal adjunction, serve
to define P\Q and Q/P uniquely in terms of the semiring operations.
In particular they force P\Q (resp. Q/P) to be the greatest, i.e. weakest,
element satisfying P(P\Q) <= Q (resp. (Q/P)P <= Q).

If we interpret PQ logically as a noncommutative conjunction P _then_ Q,
then P\Q, or P -> Q, may be understood logically as "if previously P, then Q".
Similarly Q/P, or Q <- P, becomes "Q provided eventually P".  One way of
looking at this is that we are furnishing the signature with internal
conditionals in order to express the external conditionals in (IND),
(RR), and (LR) purely equationally.

(RR) and (LR) serve not merely to axiomatize residuation but to define
it, in the sense that for any given semiring there exists at most one
interpretation of right residuation satisfying (RR).  Hence (RR) serves
(a) to pick out those semirings having exactly one such interpretation,
and (b) to enforce that interpretation.  Similarly for (LR).

The nonequational definition of residuation given by (RR) and (LR)
has the following equipollent equational expression (exercise):

        P\(Q+R) = P\Q + P\R               (10)
        P(P\Q) <= Q <= P\(PQ)             (11)
        (P+Q)/R = P/R + Q/R               (12)
        (Q/P)P <= Q <= (QP)/P             (13)

(Actually (10) and (12) are overkill; it suffices to state that P\Q
and Q/P are monotone in Q, viz. P\Q <= P\(Q+R) and similarly for Q/P.
Also left residuation along with (LR), (12), and (13) are redundant for
the purposes of the present story.)
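For a concrete feel for (RR): in the semiring of binary relations on a finite set the right residual P\Q exists and is given pointwise, and the Galois condition can be checked exhaustively. The following Python sketch (mine, illustrative) does exactly that on a three-element carrier, testing the iff against every one of the 2^9 relations.

```python
from itertools import product

CARRIER = range(3)
PAIRS = list(product(CARRIER, repeat=2))

def comp(p, q):
    # Relational composition, the semiring's "."
    return frozenset((x, z) for (x, y) in p for (y2, z) in q if y == y2)

def rres(p, q):
    # Right residual P\Q: (y, z) is in P\Q iff every x with x P y
    # also has x Q z.  This is the greatest X with P;X <= Q.
    return frozenset((y, z) for (y, z) in PAIRS
                     if all((x, z) in q for x in CARRIER if (x, y) in p))

P = frozenset({(0, 1)})
Q = frozenset({(0, 2)})
R = rres(P, Q)

# (RR) as a genuine iff, checked against all relations on the carrier:
for bits in range(2 ** len(PAIRS)):
    X = frozenset(p for i, p in enumerate(PAIRS) if bits >> i & 1)
    assert (comp(P, X) <= Q) == (X <= R)
```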

Three more equations complete the promised variety.

        1 + P*P* + P <= P*                (14)
        P* <= (P+Q)*                      (15)
        (P\P)* <= P\P                     (16) (induction, equationally) [1]

Equation (14) says that P* is a reflexive transitive element greater or
equal to P.

Equation (15) merely asserts the monotonicity of *.

Equation (16) is the equational expression of induction.  It is
equivalent to

        P(P\P)* <= P                      (16')

That is, the repeated action of P\P preserves P.

The interest in P\P is first of all that it preserves P, i.e.

        P(P\P) <= P

as an immediate consequence (by (RR)) of P\P <= P\P; and second that
it is the weakest (greatest) preserver of P, since if PQ <= P then
Q <= P\P (by (RR)).

This makes (16') the instance of induction in which the preserver of P
happens to be P\P.

But since P\P is the _weakest_ preserver of P, we can now obtain the
general (nonequational) induction principle, namely from PQ <= P infer
PQ* <= P, from this specific equationally expressible instance of it, viz.

        PQ  <= P
        Q   <= P\P          by (RR)
        Q*  <= (P\P)*       by (15) (monotonicity of *)
        PQ* <= P(P\P)*      by (8) (additivity of P. implies its monotonicity)
            <= P            by (16')

That is, the induction principle (IND) holds in the variety axiomatized by
(1)-(16).

More than this however, the unary operation * is uniquely determined
by (14)-(16) by virtue of P* being the least (hence unique) reflexive
transitive element of the semiring greater or equal to P.  To see this,
consider any reflexive transitive X.  By transitivity XX <= X so X <=
X\X, whence X* <= (X\X)* <= X\X.  Hence XX* <= X.  Concatenating X* on the
right of each side of 1 <= X yields X* <= XX* <= X.  Now suppose P <= X.
Then P* <= X* <= X, making P* the least reflexive transitive element >= P.

The residuals are of course uniquely determined by (10)-(13).  So any
semiring can be expanded with residuation and star so as to satisfy
(10)-(16) in at most one way.  That is, those seven equations not only
single out those semirings admitting such an expansion but serve to fully
define pre- and post-implication as well as reflexive transitive closure,
entirely in terms of the given semiring.

Given the well-known difficulties of defining the natural numbers using
Peano's mathematical induction, or by any other first order means, it is
a pleasant change to see an ostensibly intractable operation such as P*
pinned down exactly in this way, and even more pleasant that it is all
done purely equationally.

4.  Back to the question

We may now reword the original question in the light of the foregoing.

To what variety can one similarly associate suitable operations fully
defined equationally so as to include an equationally expressed principle
of coinduction?

--
Vaughan Pratt

[1]
        @Article(
NT77,   Author="Ng, K.C. and Tarski, A.",
        Title="Relation algebras with transitive closure,
                {A}bstract 742-02-09",
        Journal="Notices Amer. Math. Soc.", Volume=24, Pages="A29-A30",
        Year=1977)

[2]
	@InProceedings(
Pr79c,	Author="Pratt, V.R.",
	Title="Models of program logics",
	BookTitle="20th Symposium on Foundations of Computer Science",
	Address="San Juan", Month=Oct, Year=1979)

[3]
        @InProceedings(
Pr90b,  Author="Pratt, V.R.",
        Title="Action Logic and Pure Induction",
        BookTitle="Logics in AI: European Workshop JELIA '90", Series=LNCS,
        Volume=478,
        Editor="J. van Eijck", Publisher="Springer-Verlag", Pages="97-120",
        Address="Amsterdam, NL", Month=Sep, Year=1990)




* Re: coinduction: definable equationally?
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
@ 2000-10-28 20:54   ` Jesse F. Hughes
  2000-10-30  0:32     ` Andrej Bauer
  2000-11-02 17:09   ` Luigi Santocanale
  1 sibling, 1 reply; 14+ messages in thread
From: Jesse F. Hughes @ 2000-10-28 20:54 UTC (permalink / raw)
  To: Categories@Mta. Ca

Vaughan Pratt <pratt@cs.stanford.edu> writes:

> 1.  Is there a variety embedding coinduction?
> 
> Jesse Hughes distinguishes induction from recursion in terms of natural
> numbers.  For him induction is Peano's mathematical induction, while
> recursion is the uniqueness of the map from the (unique up to a unique
> isomorphism) initial algebra to an arbitrary algebra.
> 

This doesn't look like what I had in mind, exactly.  Peano's
mathematical induction was meant as an example of induction, not the
whole meaning of induction.  I stated induction in terms of natural
numbers only to continue Dusko's example.  I'm afraid that I was a bit
vague in what exactly induction is in my previous post.

I say that a G-algebra A satisfies the principle of induction just in
case every mono G-homomorphism into A is an isomorphism.  If our base
category has equalizers, then A satisfies the principle of induction
just in case, for every algebra B, there is at most one homomorphism 
A -> B.  So induction is the uniqueness part of initiality.

What's recursion, then?  As far as I'm concerned, recursion is just
initiality.  I.e., an algebra A satisfies the principle of definition
by recursion just in case it is the initial algebra.  (Why not take
recursion to be just the existence part?  I haven't seen much use for
just the existence part and it also doesn't fit well with conventional
usage.)  Clearly, on this view, the principle of recursion implies the
principle of induction, but the converse is not the case.

I hope that my idea of induction is a bit clearer.  I really must
learn to write more precisely.
-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A Practical Treatise on Fourier's Theorem_




* Re: coinduction: definable equationally?
  2000-10-28 20:54   ` Jesse F. Hughes
@ 2000-10-30  0:32     ` Andrej Bauer
  0 siblings, 0 replies; 14+ messages in thread
From: Andrej Bauer @ 2000-10-30  0:32 UTC (permalink / raw)
  To: Categories@Mta. Ca


I believe the original post asked what coalgebras might be good for.
Dusko Pavlovic already talked about examples of coinduction in
analysis. Let me mention two examples from constructive mathematics.
They are both examples of final coalgebras for polynomial functors.

A polynomial functor P_f with signature f: B --> A is a functor of the
form

   P_f(X) = \sum_{a \in A} X^{f^{*} a}

where \sum is a dependent sum, and f^{*} is the inverse image of f.
For the purposes of this post you can think of a polynomial functor as 
one of the form

   P(X) = C_0 + C_1 x X^{N_1} + ... + C_k x X^{N_k}

where C_0, ..., C_k and N_1, ..., N_k are (constant) objects.

Suppose we are in a constructive setting (a variant of Martin-Lof type
theory, effective topos, a PER model, ...) in which every polynomial
functor has a final coalgebra.



Example 1:

In constructive mathematics _spreads_ and _fans_ play an important
role. They are examples of final coalgebras. For example, a fan is a
finitely branching tree (can be infinite!) in which the nodes are
labeled with natural numbers. The space of all fans FAN satisfies the
identity

  FAN = N + N x FAN + N x FAN^2 + N x FAN^3 + ...
      = \sum_{k \in N} N x FAN^k

and it is the final coalgebra for the polynomial functor

  P(X) = \sum_{k \in N} N x X^k

I find this presentation conceptually cleaner than the usual encoding
of spreads as sets of Goedel codes of finite sequences of natural
numbers (of course, this presentation requires a richer type theory).
Also, this definition is easily adapted to fans and spreads labeled by
elements of any set A (just replace N x FAN^k with A x FAN^k), and of
any branching type.

By the way, FAN is isomorphic to N^N, and N^N is the final coalgebra
for the functor P(X) = N x X.
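A rough operational model of that last remark (encoding mine, illustrative): in the final coalgebra of P(X) = N x X, a coalgebra step : S -> N x S on any seed set S determines a unique stream, and the element of N^N given by f corresponds to the stream of its values. In Python, generators make the corecursion explicit:

```python
from itertools import islice

def unfold(step, seed):
    # Corecursion into the final coalgebra of P(X) = N x X:
    # step : Seed -> (label, next_seed) determines a unique stream.
    while True:
        label, seed = step(seed)
        yield label

# The stream corresponding to f : N -> N in N^N, here f(n) = n^2:
squares = unfold(lambda n: (n * n, n + 1), 0)
assert list(islice(squares, 5)) == [0, 1, 4, 9, 16]
```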


Example 2:

The initial algebra for P(X) = 1 + X is the set of natural numbers.

The final coalgebra N^+ for this functor is best thought of as the
one-point compactification of the natural numbers, as it consists of
the natural numbers together with one extra point "at infinity". In
classical set theory, this is no different from natural numbers, but
in a constructive setting the point at infinity is not isolated. If
one wants to prove anything interesting about N^+, just about the only
way to do it is to use coinduction and corecursion. It's a good
exercise.

This is neat because we get a one-point compactification of N without
any reference to topology.
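For readers who want to experiment, here is a crude Python rendering of N^+ (encoding and names mine): an element is either zero or a successor whose predecessor is computed lazily, and the point at infinity is its own successor, corecursively. Only bounded observations can be made, which matches the constructive point that infinity is not isolated.

```python
class Conat:
    """Element of the final coalgebra of P(X) = 1 + X: either zero
    or a successor with a lazily computed predecessor."""
    def __init__(self, pred_thunk=None):
        self._pred = pred_thunk  # None encodes zero

    def pred(self):
        return None if self._pred is None else self._pred()

ZERO = Conat()

def succ(c):
    return Conat(lambda: c)

# The extra point "at infinity": its own predecessor, corecursively.
INFINITY = Conat(lambda: INFINITY)

def observe(c, depth):
    # Count successors up to `depth`; finite elements below the bound
    # are distinguished, infinity never is.
    n = 0
    while n < depth and c._pred is not None:
        c = c.pred()
        n += 1
    return n

three = succ(succ(succ(ZERO)))
assert observe(three, 10) == 3
assert observe(INFINITY, 10) == 10
```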

If we compute N^+ in PER(D), where D is some topological model of the
untyped lambda calculus, such as P(omega) or the universal Scott
domain, we get _precisely_ the one-point compactification of natural
numbers.

In the effective topos N^+ can be described as an equivalence relation
on partial recursive functions, where two such functions are
equivalent if, and only if, they terminate in the same number of steps
(when applied to some fixed input) or they diverge. The divergent
functions represent the point at infinity, and those terminating in
exactly k steps represent the number k.


Andrej




* Re: coinduction
  2000-10-27 20:40 coinduction Jesse F. Hughes
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
@ 2000-10-30  3:38 ` Dusko Pavlovic
  1 sibling, 0 replies; 14+ messages in thread
From: Dusko Pavlovic @ 2000-10-30  3:38 UTC (permalink / raw)
  To: Categories@Mta. Ca

> > but wouldn't it be nice to keep the standard definition of
> > induction, which says that the equations
> >
> >     f(0) = c
> >     f(n+1) = g(f(n))
> >
> > determine a unique f, given c and g? this is the definition that most
> > first year

> I have always considered the statement regarding existence of a unique
> function f to be the principle of recursion.

the recursion schema, as defined by godel in 1930, is a bit stronger: g is
allowed to depend on n, not only on f(n).

> The principle of
> induction is, in my mind, the weaker principle that says that if a
> property holds of 0 and is closed under successor, then it holds of N.

in a topos, this principle is equivalent with the above schema. and the
schema itself, of course, just means that [0,s] : 1+N --> N is initial: any
algebra [c,g] : 1+X --> X induces a unique homomorphism f : N-->X. using the
exponents, we do recursion.

as i said last time, induction and recursion have been in heavy use for so
long, that they really don't leave much space for redefining, even for the
sake of symmetry with coinduction and corecursion.

-- dusko





* Re: coinduction: definable equationally?
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
  2000-10-28 20:54   ` Jesse F. Hughes
@ 2000-11-02 17:09   ` Luigi Santocanale
  1 sibling, 0 replies; 14+ messages in thread
From: Luigi Santocanale @ 2000-11-02 17:09 UTC (permalink / raw)
  To: Categories@Mta. Ca

Vaughan Pratt wrote:

> We may now reword the original question in the light of the foregoing.
> To what variety can one similarly associate suitable operations fully
> defined equationally so as to include an equationally expressed principle
> of coinduction?

In the same way that an induction principle is expressible by means of a
least fixed point, one could expect coinductive principles to be
expressible by means of greatest fixed points.

The following example will clarify what I have in mind.
Let G, H be two graphs; then we can construct the graph G*H, where

(G*H)_0 = G_0 x H_0 

and 

(G*H)_1 = (G_1 x H_0) + (G_0 x H_1) . 

Every transition of G*H is labeled: it is either a left transition or a
right transition. We obtain in this way two modal operators on the
subsets of G_0 x H_0, call them <l> and <r>. A bisimulation is a subset
B of G_0 x H_0 such that

B <= [r]<l>B and B <= [l]<r>B ,

where the [] are the duals of <>. If G = H, then saying that G has no
proper quotient amounts to saying that the greatest bisimulation is
equal to the diagonal D. Therefore a principle of coinduction can be
expressed as follows:

\nu_z.([r]<l>z \wedge [l]<r>z) = D 

where \nu_z.f(z) is the greatest fixed point of f(z). The example can be
generalized. 

The following question arises: is the greatest postfixed point definable
equationally? The answer is yes, at least in several cases, as is the
least prefixed point.

Consider a theory which contains the signature <0,+,.,\>. <0,+> satisfies
the semilattice axioms and, with respect to the induced order, a.x is left
adjoint to a\x, for a fixed a. Let f(z) be any term of the theory; then
the following holds:

g(x) is the parameterized least prefixed point of f(z).x if and only if
the equations

f(g(x)).x = g(x)
g(x) <= g(x+y)
g(f(x)\x) <= x

hold. 

It is an easy exercise to check this is true, using the fact that
f(x).(f(x)\x) <= x . If . has a right unit 1, then g(1) is the least
prefixed point of f(z). Similar results hold for the greatest postfixed
point if we can find an operation . with a parameterized right adjoint. 
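On a finite lattice the least prefixed point discussed above is reached by Kleene iteration from the bottom element. A small Python sketch (mine, illustrative), on the powerset lattice of {0,...,4}:

```python
def least_prefixed_point(f, bottom=frozenset()):
    # Kleene iteration: on a finite lattice a monotone f reaches its
    # least prefixed (= least fixed) point by iterating from bottom.
    x = bottom
    while not f(x) <= x:   # stop once x is a prefixed point
        x = x | f(x)
    return x

# f(S) = {0} union {n+1 : n in S, n < 4} on subsets of {0,...,4}:
f = lambda s: frozenset({0}) | frozenset(n + 1 for n in s if n < 4)
mu = least_prefixed_point(f)
assert mu == frozenset({0, 1, 2, 3, 4})
assert f(mu) <= mu   # mu is indeed a prefixed point
```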

Algebraic models of the propositional $\mu$-calculus form a variety, as
the models of PDL do, and hopefully coinductive principles can be
expressed. One could also define something like a $\mu$-linear logic,
its models would form a variety. 

Best regards,

	Luigi

-- 
Luigi Santocanale,

BRICS				
Department of Computer Science	Telephone:   +45 8942 3288
University of Aarhus		Telefax:     +45 8942 3255
Ny Munkegade, building 540	http://www.brics.dk/~luigis/
DK - 8000 Århus C, Denmark.	mailto:luigis@brics.dk




* Re: coinduction
@ 2000-10-30 23:09 Jesse F. Hughes
  0 siblings, 0 replies; 14+ messages in thread
From: Jesse F. Hughes @ 2000-10-30 23:09 UTC (permalink / raw)
  To: Categories@Mta. Ca

Dusko Pavlovic <dusko@kestrel.edu> writes:

> 
> > I have always considered the statement regarding existence of a
> > unique function f to be the principle of recursion.
> 
> the recursion schema, as defined by godel in 1930, is a bit
> stronger: g is allowed to depend on n, not only on f(n).

Yes, I overlooked that point. But, is that principle any stronger than
the principle you called induction?  I think it isn't, given products
in our category.  

Let N be the initial successor algebra.  Let c be in X and 
g:N x X -> X. To define a function f such that
f(0) = c
f(n+1) = g(n,f(n)),
I just have to cook up the appropriate structure map for N x X.  I
think that
<s o p_1,g> + <0,c>: (N x X) + 1 -> N x X
does it (p_1 is the projection map).  Let <k,f>: N -> N x X be the
unique function guaranteed by initiality.  One can show that f
satisfies the equations above.  (Along the way, one shows that k is
the identity.)

(To add other parameters, one needs exponentials.  For instance,
given
h:A -> X
g:A x N x X -> X
to show that there is a unique f such that
f(a,0) = h(a)
f(a,n+1) = g(a,n,f(a,n))
requires a structure map for (N x X)^A, if I'm not mistaken.)

This shows that Godel's principle of recursion is equivalent to the
statement "N is an initial algebra".  I.e., it is equivalent to the
statement that there is a unique f such that
f(0) = c
f(n+1) = g(f(n)).
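The reduction can be run directly. The following Python sketch (names mine, illustrative) folds into the product N x X with the structure map <s o p_1, g> + <0,c> described above and projects out the second component:

```python
def foldN(c, g, n):
    # Initiality of N: the unique map induced by the algebra (c, g),
    # where g depends only on the previous value.
    acc = c
    for _ in range(n):
        acc = g(acc)
    return acc

def primrec(c, g, n):
    # Godel-style recursion f(0) = c, f(n+1) = g(n, f(n)), reduced to
    # plain initiality by folding into N x X with the structure map
    # <s o p_1, g> + <0, c>: each step maps (k, x) to (k+1, g(k, x)).
    k, x = foldN((0, c), lambda kx: (kx[0] + 1, g(kx[0], kx[1])), n)
    return x

# factorial via primrec: f(0) = 1, f(n+1) = (n+1) * f(n)
fact = lambda n: primrec(1, lambda k, r: (k + 1) * r, n)
assert fact(0) == 1
assert fact(5) == 120
```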

> > The principle of
> > induction is, in my mind, the weaker principle that says that if a
> > property holds of 0 and is closed under successor, then it holds of N.
> 
> in a topos, this principle is equivalent with the above schema. and
> the schema itself, of course, just means that [0,s] : 1+N --> N is
> initial: any algebra [c,g] : 1+X --> X induces a unique homomorphism
> f : N-->X. using the exponents, we do recursion.

I am confused by this statement.  My principle of induction for
successor algebras [0_X,s_X]:1+X -> X is this: For all subsets P of X,
if 0_X is in P and P is closed under s_X, then P = X.  (Categorically,
an algebra satisfies the principle of induction if it contains no
proper subalgebras.)

Consider the successor algebra X={x}, with the evident structure map 
[0_X,s_X]:1+X -> X.  
Surely, for every subset P of X, if 0_X is in P and P is closed under
s_X, then P = X.  In other words, the algebra X satisfies the principle
of induction in the sense that I gave.  This seems to contradict the
equivalence you mention.  What am I overlooking?

> as i said last time, induction and recursion have been in heavy use
> for so long, that they really don't leave much space for redefining,
> even for the sake of symmetry with coinduction and corecursion.

I quite agree, but I didn't redefine induction, as far as I am
concerned.  I learned that induction was the proof principle as I
stated it and that recursion is the principle regarding the definition
of functions many years ago.  I think that our disagreement in
definitions is a disagreement that one will find elsewhere in the
literature.  I also suppose that the term coinduction (as I use it)
arose from the definition of induction that I offered.

Are others unfamiliar with the distinction between induction and
recursion as I drew it?  I assumed it was a more or less conventional
distinction.  If my use is really idiosyncratic, then I'd be happy to
drop it.

-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A Practical Treatise on Fourier's Theorem_




* Re: coinduction
  2000-10-20 12:57 coinduction Al Vilcius
                   ` (3 preceding siblings ...)
  2000-10-26 23:49 ` coinduction Dusko Pavlovic
@ 2000-10-30 15:55 ` Jeremy Gibbons
  4 siblings, 0 replies; 14+ messages in thread
From: Jeremy Gibbons @ 2000-10-30 15:55 UTC (permalink / raw)
  To: categories

> can anyone explain coinduction?
> in what sense is it dual to induction?

We've heard some very erudite answers to these questions. I
don't know the questioner's background, but here is a
different answer, in case it helps.

In the algebraic approach to functional programming, an
initial datatype of a functor F is a datatype T and an
operation in : F T -> T for which the equation (in the unknown h)

  h . in = f . F h

has a unique solution for given f. We write "fold_F f" for
that unique solution, so

  h = fold_F f  <=>  h . in = f . F h

This is an inductive definition of fold, and the above
equivalence (the "universal property for fold") encapsulates
proof by structural induction.

(For example, when F X = 1+X, then T is N, the naturals. The
injection in : 1+N->N is the coproduct morphism [0,1+].
Folds on naturals are functions of the form

  fold_{1+} [z,s] 0     = z
  fold_{1+} [z,s] (1+n) = s (fold_{1+} [z,s] n)

To prove that a predicate p holds for every natural n is
equivalent to proving that the function p : N -> Bool is
equal to the function alwaystrue : N -> Bool that always
returns true. Now alwaystrue is a fold,

  alwaystrue = fold_{1+} [true,step]
    where step true = true

Therefore, to prove that p holds for every natural, we can
use the universal property:

      predicate p holds of every natural
  <=>
      p = alwaystrue
  <=>
      p = fold_{1+} [true,step]
  <=>
      p . [0,1+] = [true,step] . id+p
  <=>
      p(0) = true  and  p(1+n) = step(p(n))
  <=>
      p(0) = true  and  (p(1+n) = true when p(n) = true)

which is the usual principle of mathematical induction.)
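A minimal executable rendering of fold_{1+} and the alwaystrue trick (Python, my encoding, illustrative):

```python
def fold_nat(z, s, n):
    # fold_{1+} [z,s]: the unique algebra map out of N viewed as the
    # initial algebra of F X = 1 + X.
    acc = z
    for _ in range(n):
        acc = s(acc)
    return acc

# alwaystrue = fold_{1+} [true, step] where step true = true:
alwaystrue = lambda n: fold_nat(True, lambda b: b, n)

# A predicate proved by "induction as a fold": p(0) holds and p is
# preserved by successor, so p agrees with alwaystrue (checked here
# on an initial segment of N).
p = lambda n: (2 * n) % 2 == 0
assert all(p(n) == alwaystrue(n) for n in range(20))
```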

Dually, a final datatype of a functor F is a datatype T and
an operation out : T -> F T for which the equation (in the unknown h)

  out . h = F h . f

has a unique solution for given f. We write "unfold_F f" for
that unique solution, so

  h = unfold_F f  <=>  out . h = F h . f

This is a coinductive definition of unfold, and the above
equivalence (the "universal property for unfold")
encapsulates proof by structural coinduction.

> how is it used to prove theorems?

Exactly the same way. For example, the datatype Stream A of
streams of A's is the final datatype of the functor taking X
to A x X. An example of an unfold for this type is the
function iterate, for which

  iterate f x = [ x, f x, f (f x), f (f (f x)), ... ]

defined by

  iterate f = unfold_{A x} <id,f>

One might expect that

  iterate f . f = Stream f . iterate f

and the proof of this fact is a straightforward application
of the universal property of unfold (that is, a proof by
coinduction).
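This can be sketched in Python with streams as generators (iterate, stream_map and take are helpers invented for this sketch, and the law can of course only be checked on finite prefixes):

```python
from itertools import islice

def iterate(f, x):
    """unfold_{A x} <id,f>: the stream [x, f x, f (f x), ...]."""
    while True:
        yield x
        x = f(x)

def stream_map(f, s):
    """Stream f: the action of the stream functor on a map f."""
    for a in s:
        yield f(a)

def take(n, s):
    """Observe a finite prefix of a stream."""
    return list(islice(s, n))

# The expected law  iterate f . f = Stream f . iterate f,
# observed on a 5-element prefix:
double = lambda n: 2 * n
lhs = take(5, iterate(double, double(1)))
rhs = take(5, stream_map(double, iterate(double, 1)))
```

Both sides produce the prefix [2, 4, 8, 16, 32]; the coinductive proof shows they agree at every depth, not just on finite prefixes.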

Jeremy

-- 
Jeremy.Gibbons@comlab.ox.ac.uk
  Oxford University Computing Laboratory,    TEL: +44 1865 283508
  Wolfson Building, Parks Road,              FAX: +44 1865 273839
  Oxford OX1 3QD, UK.  
  URL: http://www.comlab.ox.ac.uk/oucl/people/jeremy.gibbons.html




* Re: coinduction
@ 2000-10-28 19:39 Mamuka Jibladze
  0 siblings, 0 replies; 14+ messages in thread
From: Mamuka Jibladze @ 2000-10-28 19:39 UTC (permalink / raw)
  To: categories

Hi all,

although I cannot say much on coinduction itself, I would like to mention
one possible point of view on "universal coalgebra" commented on by Dusko at
the end of his message:

> PS i have a great appreciation for jan rutten's work on
> coinduction, but it seems to me that the title of *universal
> coalgebra* is a bit exaggerated. universal algebra does not
> dualize. by design, it is meant to be model theory of algebraic
> theories. what would be the "coalgebraic theories", leading to
> coalgebras as their models? what would their functorial
> semantics look like?  (implicitly, these questions are already
> in manes' book.)
>
> and then, moving from coalgebraic theories to comonads, like
> from algebraic theories to monads, should presumably yield an
> HSP-like characterization for the categories of (comonad)
> coalgebras. but coalgebras for a comonad over a topos usually
> form just another topos.
>
> i may be missing something essential here, but then i do stand
> to be corrected.

I would say that, although there is no straightforward dualization, there
might be a "hidden" one. In my opinion, comonads can sometimes play the role
of "strengtheners" or "enrichers" for monads, so that while monads can be
viewed as a "substrate for structure", comonads are "substrate for
semantics".

Rather than trying to give sense to this vague phrase, let me give three
illustrations.

-1-
S. Kobayashi,
Monad as modality.
Theoret. Comput. Sci. 175 (1997), no. 1, 29--74.

The starting point in that paper is the monad semantics by Moggi.
It seems that for computer-scientific purposes strength of the monad is too
restrictive a requirement, and the author modifies the semantics by assuming
strength of the monad *only with respect to a comonad*. This means that the
monad and comonad distribute in such a way that the monad descends to a
strong monad on the category of coalgebras over the comonad. Under the
Curry-Howard correspondence this yields an interesting intuitionistic
version of the modal system S4. A realizability interpretation is also
given.

-2-
Freyd, Yetter, Lawvere, Kock, Reyes and others have studied "atoms", or
infinitesimal objects - objects I in a cartesian closed category with the
property that the exponentiation functor I->(_) has a right adjoint. This
induces a monad, which cannot be strong in a nontrivial way, just as in the
previous example. There is a "largest possible" subcategory over which it is
strong, namely the category of I-discrete objects, i.e. those objects X for
which the adjunction unit from X to I->X is iso. In good cases the inclusion
of this subcategory is comonadic. As Bill pointed out, one might view
Cantor's idea of set theory as a case of this: the universe of sets is the
largest - necessarily "discrete" - part of "analysis" over which the latter
can be "enriched" (well, this is awfully inaccurate, sorry -- consider it as
a metaphor).

-3-
In differential homotopical algebra, there was developed a notion of torsor
(principal homogeneous object) which works in a non-cartesian monoidal
category. This notion requires not just a (right) action of a monoid M on an
object P, but also a "cartesianizer" in form of a left coaction of a
comonoid C on P which commutes with the action, and is principal in an
appropriate sense. It turns out that the monoid and the comonoid enter in an
entirely symmetric way. In the monoidal category of differential graded
modules over a commutative ring, principal C-M-objects are classified by
so-called twisting cochains from C to M. Moreover, in that category, for a monoid
M there is a universal choice of P and C given by the Eilenberg-Mac Lane bar
construction, and for a comonoid C there is a universal choice of P and M,
given by Adams' cobar construction. I would be grateful for a reference
to a version of these for general monads/comonads.

Regards,
Mamuka





* Re: coinduction
  2000-10-20 12:57 coinduction Al Vilcius
                   ` (2 preceding siblings ...)
  2000-10-21  1:19 ` coinduction Charles Wells
@ 2000-10-26 23:49 ` Dusko Pavlovic
  2000-10-30 15:55 ` coinduction Jeremy Gibbons
  4 siblings, 0 replies; 14+ messages in thread
From: Dusko Pavlovic @ 2000-10-26 23:49 UTC (permalink / raw)
  To: Categories@Mta. Ca

Al Vilcius asks:

> > can anyone explain coinduction?
> > in what sense is it dual to induction?

and most people reply along the lines of:

> The principle of induction says that initial algebras have no
> non-trivial subalgebras.

> Coinduction says, dually, that the final coalgebra has no
> non-trivial quotients.

but wouldn't it be nice to keep the standard definition of
induction, which says that the equations

    f(0) = c
    f(n+1) = g(f(n))

determine a unique f, given c and g? this is the definition that
most first year students study, and many programmers use every
day, usually without the slightest idea about subalgebras, or
triviality. do we really gain anything by replacing this simple
scheme by the higher-order statement about all subalgebras? ok,
we do get the appearance of symmetry between induction and
coinduction. but is the higher-order statement about quotients
really the friendliest form of coinduction, suitable to be taken
as the definition?

i think barwise and moss somewhere describe coinduction as
*induction without the base case*. the coinduction principle
says that the equation

    X = a.X + bc.X

determines a unique CCS-process, or a unique {a,b,c}-labelled
hyperset. it also guarantees that the integral equation

    E(x) = 1 + \int_0^x E dt

has a unique solution. (note that this is equivalent to the
differential equation

    E(0) = 1
    E = E'

where the initial condition gives the base case for the power
series solution. so the no-base-case slogan is just a matter of
form? but note that  E' = E, and more generally E' = h(E), for
an analytic h, identifies two *infinite* streams, and is not a
mere inductive step up the stream.)
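to make the stream reading concrete: the coefficient stream of the solution E is determined corecursively, because integration is the guard that shifts every coefficient one place up. a python sketch (the names are mine, purely illustrative; exact rationals via the fractions module):

```python
from fractions import Fraction

def coeffs_E(n):
    """First n coefficients of the unique power-series solution of
    E(x) = 1 + \int_0^x E dt.
    Integration sends sum e_k x^k to sum (e_k / (k+1)) x^(k+1),
    so e_0 = 1 (the base case supplied by the guard) and
    e_{k+1} = e_k / (k+1)."""
    e = [Fraction(1)]
    for k in range(n - 1):
        e.append(e[k] / (k + 1))
    return e
```

these are of course the coefficients 1/k! of the exponential -- the guard hands each coefficient down to the next, so no separate base case is ever stated for the stream as a whole.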

in any case, i contend that the practice of coinduction
primarily consists of solving *guarded equations*, like the two
above --- surely if you count the areas where people extensively
use coinduction without calling it by name, viz. mathematical analysis.

((it is true that computer scientists became aware of
coinduction while working out the bisimulations etc, which is
why the quotients came to light. but now it seems that the
bisimulations were just the way to construct final coalgebras;
whereas the coinduction, in analogy with the induction, should
be a practically useful logical principle, available over the
*given* final coalgebras.))

anyway, the heart of the matter probably lies in peter aczel's
insight that the statement

     * V ---> PV is a final coalgebra for P = (finite) powerset functor

equivalently means that

    * every accessible pointed graph has a unique decoration in V.

presenting the graph definitions as systems of guarded equations
(like in barwise and moss' book "vicious circles"), the above
equivalence generalizes to the statement that

    * V --d--> FV is a final coalgebra for F

iff

    * every system of guarded equations over V has a unique solution.

where the notion of guarded is determined by F and d. this is,
of course, just barwise and moss' solution lemma, lifted to F.
or more succinctly: **V is a final fixpoint iff each guarded
operation g : V --> V has a unique fixpoint**.

this, of course, supports the view that ***coinduction is
solving guarded equations***...

i was hoping to say more, but will have to leave it for later,
and post this hoping to motivate discussion.

unguardedly yours,
-- dusko pavlovic

PS i have a great appreciation for jan rutten's work on
coinduction, but it seems to me that the title of *universal
coalgebra* is a bit exaggerated. universal algebra does not
dualize. by design, it is meant to be model theory of algebraic
theories. what would be the "coalgebraic theories", leading to
coalgebras as their models? what would their functorial
semantics look like?  (implicitly, these questions are already
in manes' book.)

and then, moving from coalgebraic theories to comonads, like
from algebraic theories to monads, should presumably yield an
HSP-like characterization for the categories of (comonad)
coalgebras. but coalgebras for a comonad over a topos usually
form just another topos.

i may be missing something essential here, but then i do stand
to be corrected.





* Re: coinduction
  2000-10-20 12:57 coinduction Al Vilcius
  2000-10-20 15:35 ` coinduction Hendrik Tews
  2000-10-20 22:01 ` coinduction Jesse F. Hughes
@ 2000-10-21  1:19 ` Charles Wells
  2000-10-26 23:49 ` coinduction Dusko Pavlovic
  2000-10-30 15:55 ` coinduction Jeremy Gibbons
  4 siblings, 0 replies; 14+ messages in thread
From: Charles Wells @ 2000-10-21  1:19 UTC (permalink / raw)
  To: categories

In brief:  If a structure has no nontrivial substructures, you can prove a
property P is true of everything in the structure by proving that the elements
with property P form a nonempty substructure.  Take the structure to be N with
the unary operation of successor and you get induction.  Now say that in the
opposite category:  If a structure has no nontrivial congruences, you can prove
that two elements are equal by showing they are related by some congruence.  (You
can define "definition by coinduction" by a similar dualization.)  This has
found many applications in computer science (where the congruence is
bisimulation).  See J.J.M.M. Rutten, Universal coalgebra: a theory of systems.
Theoretical Computer Science 249(1), 2000, pp. 3-80 and other papers by him on
his website:  http://www.cwi.nl/~janr/papers/ 

--Charles Wells

>can anyone explain coinduction?
>in what sense is it dual to induction?
>does it relate to the basic picture for a NNO?
>is there a (meaningful) concept of co-recursion?
>what would be the appropriate internal language?
>how is it used to prove theorems?
>
>Al Vilcius
>al.r@vilcius.com
>
>
>



Charles Wells, 105 South Cedar St., Oberlin, Ohio 44074, USA.
email: charles@freude.com. 
home phone: 440 774 1926.  
professional website: http://www.cwru.edu/artsci/math/wells/home.html
personal website: http://www.oberlin.net/~cwells/index.html
NE Ohio Sacred Harp website: http://www.oberlin.net/~cwells/sh.htm





* Re: coinduction
  2000-10-20 12:57 coinduction Al Vilcius
  2000-10-20 15:35 ` coinduction Hendrik Tews
@ 2000-10-20 22:01 ` Jesse F. Hughes
  2000-10-21  1:19 ` coinduction Charles Wells
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Jesse F. Hughes @ 2000-10-20 22:01 UTC (permalink / raw)
  To: Categories@Mta. Ca

"Al Vilcius" <al.r@VILCIUS.com> writes:

> can anyone explain coinduction?
> in what sense is it dual to induction?
> does it relate to the basic picture for a NNO?
> is there a (meaningful) concept of co-recursion?
> what would be the appropriate internal language?
> how is it used to prove theorems?

The principle of induction says that initial algebras have no
non-trivial subalgebras.  Consider N, the initial successor algebra.
A subobject S of N is a subalgebra just in case there is a structure
map
s: S+1 -> S
such that the inclusion S -> N is a homomorphism.  This is just the
usual condition that 0 is in S and if n is in S, then so is its
successor.  Induction says that any such subalgebra exhausts the set
N.

Coinduction says, dually, that the final coalgebra has no non-trivial
quotients.  (To see that this is "really" a dual principle, we would
interpret "subalgebra" above as a regular subobject in the category of
algebras.)  However, the principle is usually stated in terms of
relations on a coalgebra, rather than quotients.  In these terms, the
principle of coinduction says: Any relation on <F,f> in the category
of coalgebras is a subrelation of equality.

Note: I stated the principles of induction/coinduction above for the
initial algebra/final coalgebra, resp.  However, just as induction
holds for any quotient of the initial algebra, coinduction holds for
any (regular) subcoalgebra of the final coalgebra.

I'm not sure what you're looking for in relation to NNOs.  An NNO is
the same as an initial algebra for the successor functor (assuming
that our category has coproducts).  The structure map N + 1 -> N is
given by [s,0], where s is the successor.  The final coalgebra for
successor is easily described (say, in Set):  It is the NNO with a
point adjoined for "infinity".  The structure map pred:F -> F + 1 fixes
infinity, takes n+1 to n and takes 0 to * (the element in 1 -- a kind
of "error condition").

As far as co-recursion goes, yes, there is a meaningful principle.
Namely, given any coalgebra <A,a>, there is a unique coalgebra
homomorphism from <A,a> to <F,f>.  Thus, for any set S together with a
function t:S -> S+1, there is a unique map h:S -> N u {infinity} such
that 
(1) if t(s) = *, then h(s) = 0
(2) else, h(t(s)) = h(s) - 1 (where infinity - 1 = infinity)
In other words, h takes an element s to the least n such that
t^(n+1)(s) = *, if such an n exists.  If, for all n, t^n(s) != *,
then h(s) = infinity.
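A sketch of this corecursively defined h in Python. Everything here is illustrative; in particular, a finite fuel bound stands in for detecting infinity, which is of course not decidable in general.

```python
STAR = object()  # the element of 1: the "error condition" *

def h(t, s, fuel=10_000):
    """The unique coalgebra map into N u {infinity} for t : S -> S + 1.
    Counts how many times t can be applied to s before hitting *;
    returns float('inf') if * is not reached within the fuel bound."""
    n = 0
    while fuel > 0:
        if t(s) is STAR:
            return n      # clause (1): if t(s) = * then h(s) = 0
        s = t(s)          # clause (2): h(s) = h(t(s)) + 1
        n += 1
        fuel -= 1
    return float('inf')   # approximation to the adjoined point "infinity"
```

For a predecessor-like t that sends 0 to * and n+1 to n, h(t, 3) counts down 3, 2, 1, 0 and returns 3, while t(s) = s + 1 never hits * and is sent to infinity.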

I'll defer on the question of what the appropriate internal language
is.

As far as how coinduction is used to prove theorems: To show two
elements of the final coalgebra are equal, it suffices to find a
coalgebraic relation relating the elements.
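Concretely, for streams this means exhibiting a bisimulation. A small Python sketch (finite-state stream coalgebras; all names invented for the sketch): a relation on states is a bisimulation when related states have equal heads and related next states, and any such relation witnesses equality of the corresponding behaviours in the final coalgebra.

```python
def is_bisimulation(head, nxt, rel):
    """Check that rel, a set of state pairs, is a bisimulation on a
    deterministic stream coalgebra <head, nxt>."""
    return all(
        head(x) == head(y) and (nxt(x), nxt(y)) in rel
        for (x, y) in rel
    )

# Two 2-state machines, both generating the stream 0, 1, 0, 1, ...
head = {'a': 0, 'b': 1, 'p': 0, 'q': 1}.get
nxt = {'a': 'b', 'b': 'a', 'p': 'q', 'q': 'p'}.get

# {('a','p'), ('b','q')} is a bisimulation, so the behaviours of
# 'a' and 'p' are equal as elements of the final coalgebra.
```

Note that the proof obligation is finite (one check per related pair) even though the behaviours being equated are infinite streams.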

Hendrik's suggestion (that you find Jacobs and Rutten's paper) is a
good one, but I hope these comments help until then.

-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A_Practical_Treatise_on_Fourier's_Theorem_




* Re: coinduction
  2000-10-20 12:57 coinduction Al Vilcius
@ 2000-10-20 15:35 ` Hendrik Tews
  2000-10-20 22:01 ` coinduction Jesse F. Hughes
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Hendrik Tews @ 2000-10-20 15:35 UTC (permalink / raw)
  To: categories

Hi,

Al Vilcius writes:
   From: "Al Vilcius" <al.r@VILCIUS.com>
   Date: Fri, 20 Oct 2000 08:57:20 -0400
   Subject: categories: coinduction
   
   can anyone explain coinduction?

Have a look at 

   Bart Jacobs and Jan Rutten
   A Tutorial on (Co)Algebras and (Co)Induction
   Bulletin of the EATCS, Vol. 62, pp. 222-259, 1996.   
   
it is available on the authors homepages, for instance
http://www.cs.kun.nl/~bart/PAPERS/JR.ps.Z

Bye,

Hendrik




* coinduction
@ 2000-10-20 12:57 Al Vilcius
  2000-10-20 15:35 ` coinduction Hendrik Tews
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Al Vilcius @ 2000-10-20 12:57 UTC (permalink / raw)
  To: Categories@Mta. Ca

can anyone explain coinduction?
in what sense is it dual to induction?
does it relate to the basic picture for a NNO?
is there a (meaningful) concept of co-recursion?
what would be the appropriate internal language?
how is it used to prove theorems?

Al Vilcius
al.r@vilcius.com





