categories - Category Theory list
* Re: coinduction
@ 2000-10-27 20:40 Jesse F. Hughes
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
  2000-10-30  3:38 ` coinduction Dusko Pavlovic
  0 siblings, 2 replies; 6+ messages in thread
From: Jesse F. Hughes @ 2000-10-27 20:40 UTC (permalink / raw)
  To: categories@mta.ca


I'm afraid that I've never quite understood the notion of guarded
equations, so I cannot comment on much of what you wrote below without
some more time and more thought.  Let me make a few introductory
comments and perhaps I can say more later.

Dusko Pavlovic <dusko@kestrel.edu> writes:

> Al Vilcius asks:
> 
> > > can anyone explain coinduction?
> > > in what sense is it dual to induction?
> 
> and most people reply along the lines of:
> 
> > The principle of induction says that initial algebras have no
> > non-trivial subalgebras.
> 
> > Coinduction says, dually, that the final coalgebra has no
> > non-trivial quotients.
> 
> but wouldn't it be nice to keep the standard definition of
> induction, which says that the equations
> 
>     f(0) = c
>     f(n+1) = g(f(n))
> 
> determine a unique f, given c and g? this is the definition that
> most first year students study, and many programmers use every
> day, usually without the slightest idea about subalgebras, or
> triviality. do we really gain anything by replacing this simple
> scheme by the higher-order statement about all subalgebras? ok,
> we do get the appearance of symmetry between induction and
> coinduction. but is the higher-order statement about quotients
> really the friendliest form of coinduction, suitable to be taken
> as the definition?
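
The scheme in the quoted equations is easy to make concrete.  A minimal
Python sketch (function names are mine, not from the thread): given c
and g, the two equations pin down exactly one f, built by applying g
n times to c.

```python
# The equations f(0) = c, f(n+1) = g(f(n)) determine a unique f;
# iterate_nat builds that unique function by applying g n times to c.
def iterate_nat(c, g):
    def f(n):
        result = c
        for _ in range(n):
            result = g(result)
        return result
    return f

# Example: c = 1, g = doubling gives f(n) = 2^n.
power_of_two = iterate_nat(1, lambda x: 2 * x)
```

Uniqueness is the point: any f satisfying both equations agrees with
iterate_nat(c, g) at every n, by induction on n.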

I have always considered the statement regarding existence of a unique
function f to be the principle of recursion.  The principle of
induction is, in my mind, the weaker principle that says that if a
property holds of 0 and is closed under successor, then it holds of N.
I identify "properties" with subobjects here.  In terms of
homomorphisms, this is equivalent to the "uniqueness part" of
initiality (assuming that our category has some epi-mono factorization
property, I suppose).

One may question whether it's appropriate to take the principle of
induction to apply to all properties or only those properties which
are definable in some sense -- I guess you've done as much when you
refer to the "higher-order statement" above.  I'm not sure to what
extent my use of all subalgebras differs from your quantification over
all g and c in the statement of recursion above, however.

When we separate the properties of induction and recursion in the way
I mention above, by the way, then (conceptually) induction is a
property of those algebras A in which each element is picked out by a
(not necessarily unique) term of our language.  I.e., those A for
which there is an epi from the initial algebra to A.  Hence, induction
in my sense is weaker than recursion (induction in your sense).

Regarding whether a statement about quotients is really the
"friendliest form" of coinduction, I'll have to agree that it is not,
if one's aim is to describe coinduction as it is commonly used.  I
chose to describe it in the terms above because these terms make clear
the "co" in "coinduction".

I did gloss over details that ensure the equivalence of the "familiar
principle" of coinduction and the principle regarding quotients.  If
the base category has kernel pairs of coequalizers and G preserves
weak pullbacks, then a G-coalgebra A has no non-trivial quotients iff the
equality relation is the largest coalgebraic relation on A (I hope
I've got that right).  Inasmuch as the principle I called coinduction
is not always equivalent to the familiar principle, I was being a bit
misleading.  However, I think it is the right approach if one's goal
is to explain the relationship between induction and coinduction.

I'm not sure that the principle regarding the equality relation is
what you'd call coinduction, however.  I think that the principle you
have in mind is what I'd call "corecursion".  Analogous to the
distinction between recursion and induction, I omit the existence part
of finality in what I call "coinduction" and I use the uniqueness
part.
 
> 
> and then, moving from coalgebraic theories to comonads, like
> from algebraic theories to monads, should presumably yield an
> HSP-like characterization for the categories of (comonad)
> coalgebras. but coalgebras for a comonad over a topos usually
> form just another topos.
> 

But, there *are* such characterizations of categories of coalgebras,
if I understand your point.  Namely, a full subcategory of coalgebras
is closed under coproducts, codomains of epis and domains of regular
monos iff it is the class of coalgebras satisfying a "coequation" --
given, here, some boundedness conditions on the functor so that things
work out well.  For brevity's sake, and because you're likely familiar
with this result, I'll omit details on what a coequation is and how
it's satisfied.

By the way, about that last statement regarding coalgebras "usually"
forming a topos:  Doesn't one need that the endofunctor preserve
pullbacks, or am I missing something?

-- 
Jesse Hughes
"The only "intuitive" interface is the nipple.  After that, it's all
learned."                             -Bruce Ediger on X interfaces
"Nipples are intuitive, my ass!"      -Quincy Prescott Hughes




* Re: coinduction: definable equationally?
  2000-10-27 20:40 coinduction Jesse F. Hughes
@ 2000-10-28  9:23 ` Vaughan Pratt
  2000-10-28 20:54   ` Jesse F. Hughes
  2000-11-02 17:09   ` Luigi Santocanale
  2000-10-30  3:38 ` coinduction Dusko Pavlovic
  1 sibling, 2 replies; 6+ messages in thread
From: Vaughan Pratt @ 2000-10-28  9:23 UTC (permalink / raw)
  To: categories@mta.ca


1.  Is there a variety embedding coinduction?

Jesse Hughes distinguishes induction from recursion in terms of natural
numbers.  For him induction is Peano's mathematical induction, while
recursion is the uniqueness of the map from the (unique up to a unique
isomorphism) initial algebra to an arbitrary algebra.

To me induction is on the one hand a slightly more general notion, yet on
the other sufficiently tractable that it can be axiomatized equationally
when suitably expressed, as I'll now describe.  In particular there
exists a variety embedding induction, meaning one in which an induction
principle holds.  Whether the additional generality makes induction
equal to recursion is another story.

The context in which all this arose, namely answers to Al Vilcius'
question about coinduction, naturally prompts the question: is there a
variety embedding coinduction?  I'll return to this at the end.

2.  Nature of Induction

Suppose, somewhere out there in the universe, a world exists where
kicking a nonworking TV set always leaves it in its nonworking state.
Then in that world (certainly not ours), it is surely the case that
repeated kicking can do no better.

The general principle here is that if performing a certain action on
a system cannot under any circumstance destroy a certain property of
that system, then neither can _repeatedly_ performing that action on
that system.

I submit that this is the essence of induction, of which mathematical
induction is just an instance.

This principle may be expressed more logically as

        PQ  |- P
        -------            (IND)
        PQ* |- P

The premise is that Q preserves P.  That is, if first P and then
afterwards Q, it follows that P.

The conclusion is that Q* preserves P.  That is, if first P and then
afterwards Q repeatedly, it follows that P.  Here we understand Q*
as the reflexive transitive closure of Q, in the spirit of Kleene's
regular expressions, which we take to denote the performance of Q zero
or more times.

By itself (IND) serves only to bound Q* from above, which would allow
us to satisfy (IND) with Q* = 0 for all Q.  We may bound Q* from below
by adding that Q* is a closure of Q, namely Q <= Q*, and is transitive
--- Q*Q* <= Q* --- and reflexive --- 1 <= Q*.  These additional
conditions make Q* the least reflexive transitive element greater or
equal to Q (proved below), thereby uniquely determining it.

3.  Equational Expression of Induction

The earliest description to my knowledge of a finitely based variety
embedding an induction principle is Tarski and Ng's RAT [1] (1977).
(The second is DALG, dynamic algebras, which I described two years later
[2] (1979), independently of [1], which I did not learn about until 1989.)
The following, based on [3] (1990), generalizes the Boolean setting for
RAT to residuated semirings.  Others who have worked on finitely based
varieties embedding induction include Kozen, Bloom, and Esik, to name
just a few.

Take as the ambient variety that of semirings with unit.  The signature
is 2-2-0-0, notated (X, ., +, 0, 1).  The equations are:

1.  (X,+,0) is a semilattice with unit: + is associative (1), commutative
(2), and idempotent (3), with identity 0 (4).

2.  (X,.,1) is a monoid:  . is associative (5), with identity 1 (6,7).

3.  Distributivity:
        x.(y+z) = x.y + x.z              (8)
        (x+y).z = x.z + y.z              (9)

(Distributivity could be weakened to monotonicity for our purposes,
but since (RR) and (LR) below force it anyway we may as well put it in
here to justify calling this a semiring.)

We make the following abbreviations.

        xy for x.y
        x <= y for x+y = y (inequations are expressible equationally)
        x |- y for x <= y (i.e. for x+y = y), to look more logical

The induction principle IND may then be understood as being about such
a semiring expanded with a unary operation *.

Thus far the signature has constants 0 and 1, binary operations + and
., and a unary operation *, amounting to Kleene's language of regular
expressions.  The addition of two further binary operations, suitably
axiomatized equationally, permits a purely equational expression of
the foregoing.

The desired language expansion adds right and left residuation, x\y and
x/y (of which just the first suffices for our immediate purposes).

              PQ <= R iff Q <= P\R (right residuation)     (RR)
                      iff P <= R/Q (left residuation)      (LR)

These axioms, recognizable in this forum of course as making P\ and
/P the polarities of a Galois connection or posetal adjunction, serve
to define P\Q and Q/P uniquely in terms of the semiring operations.
In particular they force P\Q (resp. Q/P) to be the greatest, i.e. weakest,
element satisfying P(P\Q) <= Q (resp. (Q/P)P <= Q).

If we interpret PQ logically as a noncommutative conjunction P _then_ Q,
then P\Q, or P -> Q, may be understood logically as "if previously P, then Q".
Similarly Q/P, or Q <- P, becomes "Q provided eventually P".  One way of
looking at this is that we are furnishing the signature with internal
conditionals in order to express the external conditionals in (IND),
(RR), and (LR) purely equationally.

(RR) and (LR) serve not merely to axiomatize residuation but to define
it, in the sense that for any given semiring there exists at most one
interpretation of right residuation satisfying (RR).  Hence (RR) serves
(a) to pick out those semirings having exactly one such interpretation,
and (b) to enforce that interpretation.  Similarly for (LR).

The nonequational definition of residuation given by (RR) and (LR)
has the following equipollent equational expression (exercise):

        P\(Q+R) = P\Q + P\R               (10)
        P(P\Q) <= Q <= P\(PQ)             (11)
        (P+Q)/R = P/R + Q/R               (12)
        (Q/P)P <= Q <= (QP)/P             (13)

(Actually (10) and (12) are overkill; it suffices to state that P\Q
and Q/P are monotone in Q, viz. P\Q <= P\(Q+R) and similarly for Q/P.
Also left residuation along with (LR), (12), and (13) are redundant for
the purposes of the present story.)

Three more equations complete the promised variety.

        1 + P*P* + P <= P*                (14)
        P* <= (P+Q)*                      (15)
        (P\P)* <= P\P                     (16) (induction, equationally) [1]

Equation (14) says that P* is a reflexive transitive element greater or
equal to P.

Equation (15) merely asserts the monotonicity of *.

Equation (16) is the equational expression of induction.  It is
equivalent to

        P(P\P)* <= P                      (16')

That is, the repeated action of P\P preserves P.

The interest in P\P is first of all that it preserves P, i.e.

        P(P\P) <= P

as an immediate consequence (by (RR)) of P\P <= P\P; and second that
it is the weakest (greatest) preserver of P, since if PQ <= P then
Q <= P\P (by (RR)).

This makes (16') the instance of induction in which the preserver of P
happens to be P\P.

But since P\P is the _weakest_ preserver of P, we can now obtain the
general (nonequational) induction principle, namely from PQ <= P infer
PQ* <= P, from this specific equationally expressible instance of it, viz.

        PQ  <= P
        Q   <= P\P          by (RR)
        Q*  <= (P\P)*       by (15) (monotonicity of *)
        PQ* <= P(P\P)*      by (8) (additivity of P. implies its monotonicity)
            <= P            by (16')

That is, the induction principle (IND) holds in the variety axiomatized by
(1)-(16).

More than this however, the unary operation * is uniquely determined
by (14)-(16) by virtue of P* being the least (hence unique) reflexive
transitive element of the semiring greater or equal to P.  To see this,
consider any reflexive transitive X.  By transitivity XX <= X so X <=
X\X, whence X* <= (X\X)* <= X\X.  Hence XX* <= X.  Concatenating X* on the
right of each side of 1 <= X yields X* <= XX* <= X.  Now suppose P <= X.
Then P* <= X* <= X, making P* the least reflexive transitive element >= P.

The residuals are of course uniquely determined by (10)-(13).  So any
semiring can be expanded with residuation and star so as to satisfy
(10)-(16) in at most one way.  That is, those seven equations not only
single out those semirings admitting such an expansion but serve to fully
define pre- and post-implication as well as reflexive transitive closure,
entirely in terms of the given semiring.

Given the well-known difficulties of defining the natural numbers using
Peano's mathematical induction, or by any other first order means, it is
a pleasant change to see an ostensibly intractable operation such as P*
pinned down exactly in this way, and even more pleasant that it is all
done purely equationally.
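
These equations can also be checked in their motivating model: binary
relations on a set, with . as composition, + as union, 1 as the
identity relation, \ as right residuation, and * as reflexive
transitive closure.  A brute-force Python sketch on a three-element
universe (all names are mine):

```python
from itertools import product

U = (0, 1, 2)                         # a small universe of states
PAIRS = tuple(product(U, U))
ID = frozenset((a, a) for a in U)     # the constant 1 (identity relation)

def comp(p, q):                       # P.Q : relational composition
    return frozenset((a, c) for (a, b) in p for (b2, c) in q if b == b2)

def rres(p, r):                       # P\R : the largest Q with P.Q <= R
    return frozenset((b, c) for (b, c) in PAIRS
                     if all((a, c) in r for (a, b2) in p if b2 == b))

def star(p):                          # P* : reflexive transitive closure
    s = ID | frozenset(p)
    while comp(s, s) - s:
        s = s | comp(s, s)
    return s

def leq(p, q):                        # P <= Q, i.e. P + Q = Q
    return p <= q
```

Exhaustively checking (16) over all 512 relations on a three-element
set is of course no proof, but it makes the role of P\P as the weakest
preserver of P tangible.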

4.  Back to the question

We may now reword the original question in the light of the foregoing.

To what variety can one similarly associate suitable operations fully
defined equationally so as to include an equationally expressed principle
of coinduction?

--
Vaughan Pratt

[1]
        @Article(
NT77,   Author="Ng, K.C. and Tarski, A.",
        Title="Relation algebras with transitive closure,
                {A}bstract 742-02-09",
        Journal="Notices Amer. Math. Soc.", Volume=24, Pages="A29-A30",
        Year=1977)

[2]
        @InProceedings(
Pr79c,  Author="Pratt, V.R.",
        Title="Models of program logics",
        BookTitle="20th Symposium on Foundations of Computer Science",
        Address="San Juan", Month=Oct, Year=1979)

[3]
        @InProceedings(
Pr90b,  Author="Pratt, V.R.",
        Title="Action Logic and Pure Induction",
        BookTitle="Logics in AI: European Workshop JELIA '90", Series=LNCS,
        Volume=478,
        Editor="J. van Eijck", Publisher="Springer-Verlag", Pages="97-120",
        Address="Amsterdam, NL", Month=Sep, Year=1990)




* Re: coinduction: definable equationally?
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
@ 2000-10-28 20:54   ` Jesse F. Hughes
  2000-10-30  0:32     ` Andrej Bauer
  2000-11-02 17:09   ` Luigi Santocanale
  1 sibling, 1 reply; 6+ messages in thread
From: Jesse F. Hughes @ 2000-10-28 20:54 UTC (permalink / raw)
  To: categories@mta.ca

Vaughan Pratt <pratt@cs.stanford.edu> writes:

> 1.  Is there a variety embedding coinduction?
> 
> Jesse Hughes distinguishes induction from recursion in terms of natural
> numbers.  For him induction is Peano's mathematical induction, while
> recursion is the uniqueness of the map from the (unique up to a unique
> isomorphism) initial algebra to an arbitrary algebra.
> 

This doesn't look like what I had in mind, exactly.  Peano's
mathematical induction was meant as an example of induction, not the
whole meaning of induction.  I stated induction in terms of natural
numbers only to continue Dusko's example.  I'm afraid that I was a bit
vague in what exactly induction is in my previous post.

I say that a G-algebra A satisfies the principle of induction just in
case every mono G-homomorphism into A is an isomorphism.  If our base
category has equalizers, then A satisfies the principle of induction
just in case, for every algebra B, there is at most one homomorphism 
A -> B.  So induction is the uniqueness part of initiality.

What's recursion, then?  As far as I'm concerned, recursion is just
initiality.  I.e., an algebra A satisfies the principle of definition
by recursion just in case it is the initial algebra.  (Why not take
recursion to be just the existence part?  I haven't seen much use for
just the existence part and it also doesn't fit well with conventional
usage.)  Clearly, on this view, the principle of recursion implies the
principle of induction, but the converse is not the case.

I hope that my idea of induction is a bit clearer.  I really must
learn to write more precisely.
-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A Practical Treatise on Fourier's Theorem_




* Re: coinduction: definable equationally?
  2000-10-28 20:54   ` Jesse F. Hughes
@ 2000-10-30  0:32     ` Andrej Bauer
  0 siblings, 0 replies; 6+ messages in thread
From: Andrej Bauer @ 2000-10-30  0:32 UTC (permalink / raw)
  To: categories@mta.ca


I believe the original post asked what coalgebras might be good for.
Dusko Pavlovic already talked about examples of coinduction in
analysis. Let me mention two examples from constructive mathematics.
They are both examples of final coalgebras for polynomial functors.

A polynomial functor P_f with signature f: B --> A is a functor of the
form

   P_f(X) = \sum_{a \in A} X^{f^{*} a}

where \sum is a dependent sum, and f^{*} is the inverse image of f.
For the purposes of this post you can think of a polynomial functor as 
one of the form

   P(X) = C_0 + C_1 x X^{N_1} + ... + C_k x X^{N_k}

where C_0, ..., C_k and N_1, ..., N_k are (constant) objects.

Suppose we are in a constructive setting (a variant of Martin-Lof type
theory, effective topos, a PER model, ...) in which every polynomial
functor has a final coalgebra.



Example 1:

In constructive mathematics _spreads_ and _fans_ play an important
role. They are examples of final coalgebras. For example, a fan is a
finitely branching tree (can be infinite!) in which the nodes are
labeled with natural numbers. The space of all fans FAN satisfies the
identity

  FAN = N + N x FAN + N x FAN^2 + N x FAN^3 + ...
      = \sum_{k \in N} N x FAN^k

It is the final coalgebra for the polynomial functor

  P(X) = \sum_{k \in N} N x X^k

I find this presentation conceptually cleaner than the usual encoding
of spreads as sets of Goedel codes of finite sequences of natural
numbers (of course, this presentation requires a richer type theory).
Also, this definition is easily adapted to fans and spreads labeled by
elements of any set A (just replace N x FAN^k with A x FAN^k), and of
any branching type.

By the way, FAN is isomorphic to N^N, and N^N is the final coalgebra
for the functor P(X) = N x X.
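
Corecursion into this final coalgebra has a direct computational
reading: a coalgebra c : S --> N x S unfolds into a stream, and the
unfolding map is unique.  A small Python sketch (names are mine), with
generators standing in for elements of N^N:

```python
from itertools import islice

# A coalgebra for P(X) = N x X is a map  c : S -> N x S.
# unfold(c, s0) models the unique coalgebra map from (S, c) into the
# final coalgebra, represented here by lazy streams of naturals.
def unfold(c, state):
    while True:
        n, state = c(state)
        yield n

# Example: the coalgebra "emit k, move to k + 1" unfolds to 0, 1, 2, ...
naturals = unfold(lambda k: (k, k + 1), 0)
```

Different coalgebras on different state spaces yield different streams,
but each by the same unique unfolding.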


Example 2:

The initial algebra for P(X) = 1 + X is the set of natural numbers.

The final coalgebra N^+ for this functor is best thought of as the
one-point compactification of the natural numbers, as it consists of
the natural numbers together with one extra point "at infinity". In
classical set theory, this is no different from natural numbers, but
in a constructive setting the point at infinity is not isolated. If
one wants to prove anything interesting about N^+, just about the only
way to do it is to use coinduction and corecursion. It's a good
exercise.

This is neat because we get a one-point compactification of N without
any reference to topology.
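
One way to compute with N^+ is to represent an element by its
observations: each observation either stops (the element was a finite
number) or yields a predecessor, and the point at infinity never
stops.  A Python sketch along these lines (all names are mine; an
element models the coalgebra structure N^+ --> 1 + N^+):

```python
# An element of N^+ is a thunk: calling it returns None (the element
# was zero) or another element (its predecessor).

def finite(n):                     # the finite element n
    def obs():
        return None if n == 0 else finite(n - 1)
    return obs

def infinity():                    # the point at infinity: never stops
    return infinity

def plus(x, y):                    # corecursive addition on N^+
    def obs():
        x1 = x()
        if x1 is None:
            return y()
        return plus(x1, y)
    return obs

def to_int(x, fuel=100):           # observe up to `fuel` steps
    n = 0
    while fuel > 0:
        x = x()
        if x is None:
            return n
        n += 1
        fuel -= 1
    return None                    # unresolved: may be infinite
```

Addition is defined purely corecursively, so it is total even at the
point at infinity; to_int can only ever half-decide finiteness, which
reflects the constructive fact that infinity is not isolated.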

If we compute N^+ in PER(D), where D is some topological model of the
untyped lambda calculus, such as P(omega) or the universal Scott
domain, we get _precisely_ the one-point compactification of natural
numbers.

In the effective topos N^+ can be described as an equivalence relation
on partial recursive functions, where two such functions are
equivalent if, and only if, they terminate in the same number of steps
(when applied to some fixed input) or they diverge. The divergent
functions represent the point at infinity, and those terminating in
exactly k steps represent the number k.


Andrej




* Re: coinduction
  2000-10-27 20:40 coinduction Jesse F. Hughes
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
@ 2000-10-30  3:38 ` Dusko Pavlovic
  1 sibling, 0 replies; 6+ messages in thread
From: Dusko Pavlovic @ 2000-10-30  3:38 UTC (permalink / raw)
  To: categories@mta.ca

> > but wouldn't it be nice to keep the standard definition of
> > induction, which says that the equations
> >
> >     f(0) = c
> >     f(n+1) = g(f(n))
> >
> > determine a unique f, given c and g? this is the definition that most
> > first year students study [...]

> I have always considered the statement regarding existence of a unique
> function f to be the principle of recursion.

the recursion schema, as defined by godel in 1930, is a bit stronger: g is
allowed to depend on n, not only on f(n).

> The principle of
> induction is, in my mind, the weaker principle that says that if a
> property holds of 0 and is closed under successor, then it holds of N.

in a topos, this principle is equivalent to the above schema. and the
schema itself, of course, just means that [0,s] : 1+N --> N is initial: any
algebra [c,g] : 1+X --> X induces a unique homomorphism f : N-->X. using the
exponents, we do recursion.
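
the difference between the schema and plain iteration is easy to see
computationally.  a python sketch (names are mine): the stronger schema
lets g see the index n, and it reduces to plain iteration by iterating
on pairs (n, f(n)); the classic function needing the index is the
predecessor.

```python
# Godel-style primitive recursion: f(0) = c, f(n+1) = g(n, f(n)).
# It reduces to plain iteration by carrying the index along in a pair.
def prim_rec(c, g):
    def f(n):
        k, acc = 0, c
        for _ in range(n):
            k, acc = k + 1, g(k, acc)
        return acc
    return f

# Predecessor: g ignores f(n) and just returns the index.
pred = prim_rec(0, lambda n, _: n)
```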

as i said last time, induction and recursion have been in heavy use for so
long, that they really don't leave much space for redefining, even for the
sake of symmetry with coinduction and corecursion.

-- dusko





* Re: coinduction: definable equationally?
  2000-10-28  9:23 ` coinduction: definable equationally? Vaughan Pratt
  2000-10-28 20:54   ` Jesse F. Hughes
@ 2000-11-02 17:09   ` Luigi Santocanale
  1 sibling, 0 replies; 6+ messages in thread
From: Luigi Santocanale @ 2000-11-02 17:09 UTC (permalink / raw)
  To: categories@mta.ca

Vaughan Pratt wrote:

> We may now reword the original question in the light of the foregoing.
> To what variety can one similarly associate suitable operations fully
> defined equationally so as to include an equationally expressed principle
> of coinduction?

In the same way an induction principle is expressible by means of a
least fixed point, one could expect coinductive principles to be
expressible by means of greatest fixed points.

The following example will clarify what I have in mind.
Let G, H be two graphs; then we can construct the graph G*H, where 

(G*H)_0 = G_0 x H_0 

and 

(G*H)_1 = (G_1 x H_0) + (G_0 x H_1) . 

Every transition of G*H is labeled: it is either a left transition or a
right transition. We obtain in this way two modal operators on the
subsets of G_0 x H_0, call them <l> and <r>. A bisimulation is a subset
B of G_0 x H_0 such that

B <= [r]<l>B and B <= [l]<r>B , 

where the [] are the duals of <>. If G = H, then saying that G has no
proper quotient amounts to saying that the greatest bisimulation is
equal to the diagonal D. Therefore a principle of coinduction can be
expressed as follows:

\nu_z.([r]<l>z \wedge [l]<r>z) = D 

where \nu_z.f(z) is the greatest fixed point of f(z). The example can be
generalized. 
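
For a single unlabeled graph, the greatest bisimulation can be computed
by the usual greatest-fixed-point iteration: start from the full
relation and discard pairs violating the back-and-forth condition until
nothing changes.  A Python sketch (the function name is mine; this is
the plain back-and-forth form, without the modal-operator dressing):

```python
def greatest_bisimulation(states, edges):
    # nu-fixpoint: start from the full relation and peel off pairs
    # violating the back-and-forth condition until stable.
    succ = {s: {t for (a, t) in edges if a == s} for s in states}
    b = {(x, y) for x in states for y in states}
    while True:
        b2 = {(x, y) for (x, y) in b
              if all(any((x2, y2) in b for y2 in succ[y]) for x2 in succ[x])
              and all(any((x2, y2) in b for x2 in succ[x]) for y2 in succ[y])}
        if b2 == b:
            return b
        b = b2
```

On a graph where every state is distinguishable the iteration stops at
the diagonal D, matching the coinduction principle above; a graph with
a proper quotient yields a strictly larger fixed point.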

The following question arises: is the greatest postfixed point definable
equationally? The answer is yes, at least in several cases, as is the
least prefixed point.

Consider a theory which contains the signature <0,+,.,\>, where <0,+>
satisfies the semilattice axioms and, with respect to the induced order,
a.x is left adjoint to a\x, for a fixed a. Let f(z) be any term of the
theory; then the following holds:

g(x) is the parameterized least prefixed point of f(z).x if and only if
the equations

f(g(x)).x = g(x)
g(x) <= g(x+y)
g(f(x)\x) <= x

hold. 

It is an easy exercise to check this is true, using the fact that
f(x).(f(x)\x) <= x . If . has a right unit 1, then g(1) is the least
prefixed point of f(z). Similar results hold for the greatest postfixed
point if we can find an operation . with a parameterized right adjoint. 

Algebraic models of the propositional $\mu$-calculus form a variety, as
the models of PDL do, and hopefully coinductive principles can be
expressed. One could also define something like a $\mu$-linear logic;
its models would form a variety. 

Best regards,

	Luigi

-- 
Luigi Santocanale,

BRICS				
Department of Computer Science	Telephone:   +45 8942 3288
University of Aarhus		Telefax:     +45 8942 3255
Ny Munkegade, building 540	http://www.brics.dk/~luigis/
DK - 8000 Århus C, Denmark.	mailto:luigis@brics.dk



