categories - Category Theory list
* coinduction
From: Al Vilcius @ 2000-10-20 12:57 UTC
  To: categories@mta.ca

can anyone explain coinduction?
in what sense is it dual to induction?
does it relate to the basic picture for an NNO?
is there a (meaningful) concept of co-recursion?
what would be the appropriate internal language?
how is it used to prove theorems?

Al Vilcius
al.r@vilcius.com






* Re: coinduction
From: Hendrik Tews @ 2000-10-20 15:35 UTC
  To: categories

Hi,

Al Vilcius writes:
   From: "Al Vilcius" <al.r@VILCIUS.com>
   Date: Fri, 20 Oct 2000 08:57:20 -0400
   Subject: categories: coinduction
   
   can anyone explain coinduction?

Have a look at 

   Bart Jacobs and Jan Rutten
   A Tutorial on (Co)Algebras and (Co)Induction
   Bulletin of the EATCS, Vol. 62, pp. 222-259, 1996.   
   
it is available on the authors' homepages, for instance
http://www.cs.kun.nl/~bart/PAPERS/JR.ps.Z

Bye,

Hendrik




* Re: coinduction
From: Jesse F. Hughes @ 2000-10-20 22:01 UTC
  To: categories@mta.ca

"Al Vilcius" <al.r@VILCIUS.com> writes:

> can anyone explain coinduction?
> in what sense is it dual to induction?
> does it relate to the basic picture for an NNO?
> is there a (meaningful) concept of co-recursion?
> what would be the appropriate internal language?
> how is it used to prove theorems?

The principle of induction says that initial algebras have no
non-trivial subalgebras.  Consider N, the initial successor algebra.
A subobject S of N is a subalgebra just in case there is a structure
map
s: S+1 -> S
such that the inclusion S -> N is a homomorphism.  This is just the
usual condition that 0 is in S and if n is in S, then so is its
successor.  Induction says that any such subalgebra exhausts the set
N.

Coinduction says, dually, that the final coalgebra has no non-trivial
quotients.  (To see that this is "really" a dual principle, we would
interpret "subalgebra" above as a regular subobject in the category of
algebras.)  However, the principle is usually stated in terms of
relations on a coalgebra, rather than quotients.  In these terms, the
principle of coinduction says: Any relation on <F,f> in the category
of coalgebras is a subrelation of equality.

Note: I stated the principles of induction/coinduction above for the
initial algebra/final coalgebra, resp.  However, just as induction
holds for any quotient of the initial algebra, coinduction holds for
any (regular) subcoalgebra of the final coalgebra.

I'm not sure what you're looking for in relation to NNOs.  An NNO is
the same as an initial algebra for the successor functor (assuming
that our category has coproducts).  The structure map N + 1 -> N is
given by [s,0], where s is the successor.  The final coalgebra for
successor is easily described (say, in Set):  It is the NNO with a
point adjoined for "infinity".  The structure map pred:F -> F + 1 fixes
infinity, takes n+1 to n and takes 0 to * (the element in 1 -- a kind
of "error condition").

As far as co-recursion goes, yes, there is a meaningful principle.
Namely, given any coalgebra <A,a>, there is a unique coalgebra
homomorphism from <A,a> to <F,f>.  Thus, for any set S together with a
function t:S -> S+1, there is a unique map h:S -> N u {infinity} such
that 
(1) if t(s) = *, then h(s) = 0
(2) else, h(t(s)) = h(s) - 1 (where infinity - 1 = infinity)
In other words, h here takes an element s to the least n such that
t^n(s) = *, if defined.  If, for all n, t^n(s) != *, then h(s) =
infinity.
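
A minimal Haskell sketch of this corecursion (the names Conat, predC and
toConat are ad hoc additions, and Haskell's laziness stands in for
finality -- ignoring bottoms):

  -- Conatural numbers: morally the final coalgebra of X |-> X + 1,
  -- so they contain an "infinity" element.
  data Conat = Zero | Succ Conat

  infinity :: Conat
  infinity = Succ infinity

  -- The structure map pred : F -> F + 1 described above.
  predC :: Conat -> Maybe Conat
  predC Zero     = Nothing        -- 0 goes to * (the "error condition")
  predC (Succ n) = Just n

  -- Corecursion: from any coalgebra t : S -> S + 1 (here s -> Maybe s),
  -- the unique coalgebra morphism h : S -> Conat.  It counts how many
  -- steps t takes to reach *, and produces infinity if it never does.
  toConat :: (s -> Maybe s) -> s -> Conat
  toConat t x = case t x of
    Nothing -> Zero
    Just y  -> Succ (toConat t y)

For instance, toConat (\n -> if n == 0 then Nothing else Just (n - 1))
gives the evident embedding of the naturals into Conat.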

I'll defer on the question of what the appropriate internal language
is.

As far as how coinduction is used to prove theorems: To show two
elements of the final coalgebra are equal, it suffices to find a
coalgebraic relation relating the elements.

Hendrik's suggestion (that you find Jacobs and Rutten's paper) is a
good one, but I hope these comments help until then.

-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A Practical Treatise on Fourier's Theorem_




* Re: coinduction
From: Charles Wells @ 2000-10-21  1:19 UTC
  To: categories

In brief:  If a structure has no nontrivial substructures, you can prove a
property P is true of everything in the structure by proving that the elements
with property P form a nonempty substructure.  Take the structure to be N with
the unary operation of successor and you get induction.  Now say that in the
opposite category:  If a structure has no nontrivial congruences, you can prove
that two elements are equal by showing that they are related by some congruence.
(You can define "definition by coinduction" by a similar dualization.)  This has
found many applications in computer science (where the congruence is
bisimulation).  See J.J.M.M. Rutten, Universal coalgebra: a theory of systems.
Theoretical Computer Science 249(1), 2000, pp. 3-80 and other papers by him on
his website:  http://www.cwi.nl/~janr/papers/ 
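
A tiny illustration of the proof principle, in Haskell notation (an added
sketch, not from the original message): to show that two corecursively
defined streams are equal, exhibit a bisimulation relating them.

  -- two corecursive definitions of the all-ones stream
  ones, ones' :: [Integer]
  ones  = 1 : ones
  ones' = 1 : 1 : ones'

The relation { (ones, ones'), (ones, tail ones') } is a bisimulation:
related streams have equal heads and related tails.  By coinduction,
ones = ones', even though the two definitions unfold differently.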

--Charles Wells

>can anyone explain coinduction?
>in what sense is it dual to induction?
>does it relate to the basic picture for an NNO?
>is there a (meaningful) concept of co-recursion?
>what would be the appropriate internal language?
>how is it used to prove theorems?
>
>Al Vilcius
>al.r@vilcius.com
>
>
>



Charles Wells, 105 South Cedar St., Oberlin, Ohio 44074, USA.
email: charles@freude.com. 
home phone: 440 774 1926.  
professional website: http://www.cwru.edu/artsci/math/wells/home.html
personal website: http://www.oberlin.net/~cwells/index.html
NE Ohio Sacred Harp website: http://www.oberlin.net/~cwells/sh.htm





* Re: coinduction
From: Dusko Pavlovic @ 2000-10-26 23:49 UTC
  To: categories@mta.ca

Al Vilcius asks:

> > can anyone explain coinduction?
> > in what sense is it dual to induction?

and most people reply along the lines of:

> The principle of induction says that initial algebras have no
> non-trivial subalgebras.

> Coinduction says, dually, that the final coalgebra has no non-trivial
> quotients.

but wouldn't it be nice to keep the standard definition of
induction, which says that the equations

    f(0) = c
    f(n+1) = g(f(n))

determine a unique f, given c and g? this is the definition that
most first year students study, and many programmers use every
day, usually without the slightest idea about subalgebras, or
triviality. do we really gain anything by replacing this simple
scheme by the higher-order statement about all subalgebras? ok,
we do get the appearance of symmetry between induction and
coinduction. but is the higher-order statement about quotients
really the friendliest form of coinduction, suitable to be taken
as the definition?

i think barwise and moss somewhere describe coinduction as
*induction without the base case*. the coinduction principle
says that the equation

    X = a.X + bc.X

determines a unique CCS-process, or a unique {a,b,c}-labelled
hyperset. it also guarantees that the integral equation

    E(x) = 1 + \int_0^x E dt

has a unique solution. (note that this is equivalent to the
differential equation

    E(0) = 1
    E = E'

where the initial condition gives the base case for the power
series solution. so the no-base-case slogan is just a matter of
form? but note that  E' = E, and more generally E' = h(E), for
an analytic h, identifies two *infinite* streams, and is not a
mere inductive step up the stream.)

in any case, i contend that the practice of coinduction
primarily consists of solving *guarded equations*, like the two
above --- surely if you count the areas where people extensively
use coinduction, without calling it by name, viz math analysis.
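
for instance, in a lazy language like haskell the integral equation above
becomes exactly such a guarded equation, now on the stream of taylor
coefficients (an added sketch, with the ad hoc name expSeries):

    -- E = 1 + \int_0^x E dt:  integrating shifts coefficients and divides
    -- by the new exponent, so e_0 = 1 and e_{n+1} = e_n / (n+1).  the
    -- self-reference is guarded by the cons, hence productive, hence
    -- the solution is unique.
    expSeries :: [Rational]
    expSeries = 1 : zipWith (/) expSeries [1 ..]

    -- its first coefficients are 1, 1, 1/2, 1/6, 1/24, ...: the series for e^x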

((it is true that computer scientists became aware of
coinduction while working out the bisimulations etc, which is
why the quotients came to light. but now it seems that the
bisimulations were just the way to construct final coalgebras;
whereas the coinduction, in analogy with the induction, should
be a practically useful logical principle, available over the
*given* final coalgebras.))

anyway, the heart of the matter probably lies in peter aczel's
insight that the statement

    * V ---> PV is a final coalgebra for P = (finite) powerset functor

equivalently means that

    * every accessible pointed graph has a unique decoration in V.

presenting the graph definitions as systems of guarded equations
(like in barwise and moss' book "vicious circles"), the above
equivalence generalizes to the statement that

    * V --d--> FV is a final coalgebra for F

iff

    * every system of guarded equations over V has a unique solution.

where the notion of guarded is determined by F and d. this is,
of course, just barwise and moss' solution lemma, lifted to F.
or more succinctly: **V is a final fixpoint iff each guarded
operation g:V-->V has a unique fixpoint**.

this, of course, supports the view that ***coinduction is
solving guarded equations***...

i was hoping to say more, but will have to leave it for later,
and post this hoping to motivate discussion.

unguardedly yours,
-- dusko pavlovic

PS i have a great appreciation for jan rutten's work on
coinduction, but it seems to me that the title of *universal
coalgebra* is a bit exaggerated. universal algebra does not
dualize. by design, it is meant to be model theory of algebraic
theories. what would be the "coalgebraic theories", leading to
coalgebras as their models? what would their functorial
semantics look like?  (implicitly, these questions are already
in manes' book.)

and then, moving from coalgebraic theories to comonads, like
from algebraic theories to monads, should presumably yield an
HSP-like characterization for the categories of (comonad)
coalgebras. but coalgebras for a comonad over a topos usually
form just another topos.

i may be missing something essential here, but then i do stand
to be corrected.





* Re: coinduction
From: Jeremy Gibbons @ 2000-10-30 15:55 UTC
  To: categories

> can anyone explain coinduction?
> in what sense is it dual to induction?

We've heard some very erudite answers to these questions. I
don't know the questioner's background, but here is a
different answer, in case it helps.

In the algebraic approach to functional programming, an
initial datatype of a functor F is a datatype T and an
operation in : F T -> T for which the equation, in the unknown h,

  h . in = f . F h

has a unique solution for given f. We write "fold_F f" for
that unique solution, so

  h = fold_F f  <=>  h . in = f . F h

This is an inductive definition of fold, and the above
equivalence (the "universal property for fold") encapsulates
proof by structural induction.

(For example, when F X = 1+X, then T is N, the naturals. The
structure map in : 1+N->N is the coproduct morphism [0,1+].
Folds on naturals are functions of the form

  fold_{1+} [z,s] 0     = z
  fold_{1+} [z,s] (1+n) = s (fold_{1+} [z,s] n)

To prove that a predicate p holds for every natural n is
equivalent to proving that the function p : N -> Bool is
equal to the function alwaystrue : N -> Bool that always
returns true. Now alwaystrue is a fold,

  alwaystrue = fold_{1+} [true,step]
    where step true = true

Therefore, to prove that p holds for every natural, we can
use the universal property:

      predicate p holds of every natural
  <=>
      p = alwaystrue
  <=>
      p = fold_{1+} [true,step]
  <=>
      p . [0,1+] = [true,step] . id+p
  <=>
      p(0) = true  and  p(1+n) = step(p(n))
  <=>
      p(0) = true  and  (p(1+n) = true when p(n) = true)

which is the usual principle of mathematical induction.)
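
In Haskell this example reads as follows (a sketch with the ad hoc names
Nat, foldNat and alwaysTrue):

  -- the initial algebra of F X = 1+X, and its fold
  data Nat = Zero | Succ Nat

  -- foldNat z s  plays the role of  fold_{1+} [z,s]:
  -- the unique h with  h . in = [z,s] . (id+h)
  foldNat :: b -> (b -> b) -> Nat -> b
  foldNat z s Zero     = z
  foldNat z s (Succ n) = s (foldNat z s n)

  -- "p holds of every natural" becomes "p = alwaysTrue", and alwaysTrue is a fold
  alwaysTrue :: Nat -> Bool
  alwaysTrue = foldNat True (\_ -> True)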

Dually, a final datatype of a functor F is a datatype T and
an operation out : T -> F T for which the equation, in the unknown h,

  out . h = F h . f

has a unique solution for given f. We write "unfold_F f" for
that unique solution, so

  h = unfold_F f  <=>  out . h = F h . f

This is a coinductive definition of unfold, and the above
equivalence (the "universal property for unfold")
encapsulates proof by structural coinduction.

> how is it used to prove theorems?

Exactly the same way. For example, the datatype Stream A of
streams of A's is the final datatype of the functor taking X
to A x X. An example of an unfold for this type is the
function iterate, for which

  iterate f x = [ x, f x, f (f x), f (f (f x)), ... ]

defined by

  iterate f = unfold_{A x} <id,f>

One might expect that

  iterate f . f = Stream f . iterate f

and the proof of this fact is a straightforward application
of the universal property of unfold (that is, a proof by
coinduction).
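
Again as a Haskell sketch (the names are ad hoc; Haskell's lazy lists
would serve equally well for Stream):

  -- streams as the final coalgebra of F X = A x X
  data Stream a = Cons a (Stream a)

  out :: Stream a -> (a, Stream a)
  out (Cons x xs) = (x, xs)

  -- unfoldStream f is the unique h with  out . h = (id x h) . f
  unfoldStream :: (b -> (a, b)) -> b -> Stream a
  unfoldStream f b = let (a, b') = f b in Cons a (unfoldStream f b')

  -- iterate f = unfold_{A x} <id,f>
  iterateS :: (a -> a) -> a -> Stream a
  iterateS f = unfoldStream (\x -> (x, f x))

  -- the functor action "Stream f", itself an unfold
  mapS :: (a -> b) -> Stream a -> Stream b
  mapS f = unfoldStream (\(Cons x xs) -> (f x, xs))

Both  iterateS f . f  and  mapS f . iterateS f  satisfy
out . h = (id x h) . <f,f>, so they coincide by the uniqueness half of
the universal property of unfold.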

Jeremy

-- 
Jeremy.Gibbons@comlab.ox.ac.uk
  Oxford University Computing Laboratory,    TEL: +44 1865 283508
  Wolfson Building, Parks Road,              FAX: +44 1865 273839
  Oxford OX1 3QD, UK.  
  URL: http://www.comlab.ox.ac.uk/oucl/people/jeremy.gibbons.html




* Re: coinduction
From: Jesse F. Hughes @ 2000-10-30 23:09 UTC
  To: categories@mta.ca

Dusko Pavlovic <dusko@kestrel.edu> writes:

> 
> > I have always considered the statement regarding existence of a
> > unique function f to be the principle of recursion.
> 
> the recursion schema, as defined by godel in 1930, is a bit
> stronger: g is allowed to depend on n, not only on f(n).

Yes, I overlooked that point. But, is that principle any stronger than
the principle you called induction?  I think it isn't, given products
in our category.  

Let N be the initial successor algebra.  Let c be in X and 
g:N x X -> X. To define a function f such that
f(0) = c
f(n+1) = g(n,f(n)),
I just have to cook up the appropriate structure map for N x X.  I
think that
[<s o p_1, g>, <0,c>] : (N x X) + 1 -> N x X
does it (p_1 is the first projection).  Let <k,f>: N -> N x X be the
unique homomorphism guaranteed by initiality.  One can show that f
satisfies the equations above.  (Along the way, one shows that k is
the identity.)
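
In Haskell the same tupling trick looks like this (an added sketch; Nat
and foldNat are the evident initial-algebra presentation and plain fold):

  data Nat = Zero | Succ Nat

  foldNat :: b -> (b -> b) -> Nat -> b
  foldNat z s Zero     = z
  foldNat z s (Succ n) = s (foldNat z s n)

  -- Godel-style primitive recursion: g may depend on n as well as on f(n).
  -- Fold into the product N x X with the structure map above, then project.
  primRec :: x -> (Nat -> x -> x) -> Nat -> x
  primRec c g = snd . foldNat (Zero, c) step
    where
      step (n, x) = (Succ n, g n x)   -- the <s o p_1, g> component

  -- primRec c g Zero     == c
  -- primRec c g (Succ n) == g n (primRec c g n)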

(To add other parameters, one needs exponentials.  For instance,
given
h:A -> X
g:A x N x X -> X
to show that there is a unique f such that
f(a,0) = h(a)
f(a,n+1) = g(a,n,f(a,n))
requires a structure map for (X x N)^A, if I'm not mistaken.)

This shows that Godel's principle of recursion is equivalent to the
statement "N is an initial algebra".  I.e., it is equivalent to the
statement that there is a unique f such that
f(0) = c
f(n+1) = g(f(n)).

> > The principle of
> > induction is, in my mind, the weaker principle that says that if a
> > property holds of 0 and is closed under successor, then it holds of N.
> 
> in a topos, this principle is equivalent with the above schema. and
> the schema itself, of course, just means that [0,s] : 1+N --> N is
> initial: any algebra [c,g] : 1+X --> X induces a unique homomorphism
> f : N-->X. using the exponents, we do recursion.

I am confused by this statement.  My principle of induction for
successor algebras [0_X,s_X]:1+X -> X is this: For all subsets P of X,
if 0_X is in P and P is closed under s_X, then P = X.  (Categorically,
an algebra satisfies the principle of induction if it contains no
proper subalgebras.)

Consider the successor algebra X={x}, with the evident structure map 
[0_X,s_X]:1+X -> X.  
Surely, for every subset P of X, if 0_X is in P and P is closed under
s_X, then P = X.  In other words, the algebra X satisfies the principle
of induction in the sense that I gave.  This seems to contradict the
equivalence you mention.  What am I overlooking?

> as i said last time, induction and recursion have been in heavy use
> for so long, that they really don't leave much space for redefining,
> even for the sake of symmetry with coinduction and corecursion.

I quite agree, but I didn't redefine induction, as far as I am
concerned.  I learned that induction was the proof principle as I
stated it and that recursion is the principle regarding the definition
of functions many years ago.  I think that our disagreement in
definitions is a disagreement that one will find elsewhere in the
literature.  I also suppose that the term coinduction (as I use it)
arose from the definition of induction that I offered.

Are others unfamiliar with the distinction between induction and
recursion as I drew it?  I assumed it was a more or less conventional
distinction.  If my use is really idiosyncratic, then I'd be happy to
drop it.

-- 
Jesse Hughes
"Such behaviour is exclusively confined to functions invented by
mathematicians for the sake of causing trouble."
 -Albert Eagle's _A Practical Treatise on Fourier's Theorem_




* Re: coinduction
From: Dusko Pavlovic @ 2000-10-30  3:38 UTC
  To: categories@mta.ca

> > but wouldn't it be nice to keep the standard definition of
> > induction, which says that the equations
> >
> >     f(0) = c
> >     f(n+1) = g(f(n))
> >
> > determine a unique f, given c and g? this is the definition that
> > most first year students study [...]

> I have always considered the statement regarding existence of a unique
> function f to be the principle of recursion.

the recursion schema, as defined by godel in 1930, is a bit  stronger: g is
allowed to depend on n, not only on f(n).

> The principle of
> induction is, in my mind, the weaker principle that says that if a
> property holds of 0 and is closed under successor, then it holds of N.

in a topos, this principle is equivalent with the above schema. and the
schema itself, of course, just means that [0,s] : 1+N --> N is initial: any
algebra [c,g] : 1+X --> X induces a unique homomorphism f : N-->X. using the
exponents, we do recursion.

as i said last time, induction and recursion have been in heavy use for so
long, that they really don't leave much space for redefining, even for the
sake of symmetry with coinduction and corecursion.

-- dusko





* Re: coinduction
From: Mamuka Jibladze @ 2000-10-28 19:39 UTC
  To: categories

Hi all,

although I cannot say much on coinduction itself, I would like to mention
one possible point of view on "universal coalgebra" commented on by Dusko at
the end of his message:

> PS i have a great appreciation for jan rutten's work on
> coinduction, but it seems to me that the title of *universal
> coalgebra* is a bit exaggerated. universal algebra does not
> dualize. by design, it is meant to be model theory of algebraic
> theories. what would be the "coalgebraic theories", leading to
> coalgebras as their models? what would their functorial
> semantics look like?  (implicitly, these questions are already
> in manes' book.)
>
> and then, moving from coalgebraic theories to comonads, like
> from algebraic theories to monads, should presumably yield an
> HSP-like characterization for the categories of (comonad)
> coalgebras. but coalgebras for a comonad over a topos usually
> form just another topos.
>
> i may be missing something essential here, but then i do stand
> to be corrected.

I would say that, although there is no straightforward dualization, there
might be a "hidden" one. In my opinion, comonads can sometimes play the role
of "strengtheners" or "enrichers" for monads, so that while monads can be
viewed as a "substrate for structure", comonads are "substrate for
semantics".

Rather than trying to give sense to this vague phrase, let me give three
illustrations.

-1-
S. Kobayashi,
Monad as modality.
Theoret. Comput. Sci. 175 (1997), no. 1, 29--74.

The starting point in that paper is the monad semantics by Moggi.
It seems that for computer-scientific purposes strength of the monad is too
restrictive a requirement, and the author modifies the semantics by assuming
strength of the monad *only with respect to a comonad*. This means that the
monad and comonad distribute in such a way that the monad descends to a
strong monad on the category of coalgebras over the comonad. Under the
Curry-Howard correspondence this yields an interesting intuitionistic
version of the modal system S4. A realizability interpretation is also
given.

-2-
Freyd, Yetter, Lawvere, Kock, Reyes and others have studied "atoms", or
infinitesimal objects - objects I in a cartesian closed category with the
property that the exponentiation functor I->(_) has a right adjoint. This
induces a monad, which cannot be strong in a nontrivial way, just as in the
previous example. There is a "largest possible" subcategory over which it is
strong, namely the category of I-discrete objects, i.e. those objects X for
which the adjunction unit from X to I->X is iso. In good cases the inclusion
of this subcategory is comonadic. As Bill pointed out, one might view
Cantor's idea of set theory as a case of this: the universe of sets is the
largest - necessarily "discrete" - part of "analysis" over which the latter
can be "enriched" (well, this is awfully inaccurate, sorry -- consider it as
a metaphor).

-3-
In differential homotopical algebra, there was developed a notion of torsor
(principal homogeneous object) which works in a non-cartesian monoidal
category. This notion requires not just a (right) action of a monoid M on an
object P, but also a "cartesianizer" in form of a left coaction of a
comonoid C on P which commutes with the action, and is principal in an
appropriate sense. It turns out that the monoid and the comonoid enter in an
entirely symmetric way. In the monoidal category of differential graded
modules over a commutative ring, principal C-M-objects are classified by
so-called twisting cochains from C to M. Moreover in that category, for a monoid
M there is a universal choice of P and C given by the Eilenberg-Mac Lane bar
construction, and for a comonoid C there is a universal choice of P and M,
given by Adams' cobar construction. I would be grateful for a reference
to a version of these for general monads/comonads.

Regards,
Mamuka





* Re: coinduction
From: Jesse F. Hughes @ 2000-10-27 20:40 UTC
  To: categories@mta.ca


I'm afraid that I've never quite understood the notion of guarded
equations, so I cannot comment on much of what you wrote below without
some more time and more thought.  Let me make a few introductory
comments and perhaps I can say more later.

Dusko Pavlovic <dusko@kestrel.edu> writes:

> Al Vilcius asks:
> 
> > > can anyone explain coinduction?
> > > in what sense is it dual to induction?
> 
> and most people reply along the lines of:
> 
> > The principle of induction says that initial algebras have no
> > non-trivial subalgebras.
> 
> > Coinduction says, dually, that the final coalgebra has no non-trivial
> > quotients.
> 
> but wouldn't it be nice to keep the standard definition of
> induction, which says that the equations
> 
>     f(0) = c
>     f(n+1) = g(f(n))
> 
> determine a unique f, given c and g? this is the definition that
> most first year students study, and many programmers use every
> day, usually without the slightest idea about subalgebras, or
> triviality. do we really gain anything by replacing this simple
> scheme by the higher-order statement about all subalgebras? ok,
> we do get the appearance of symmetry between induction and
> coinduction. but is the higher-order statement about quotients
> really the friendliest form of coinduction, suitable to be taken
> as the definition?

I have always considered the statement regarding existence of a unique
function f to be the principle of recursion.  The principle of
induction is, in my mind, the weaker principle that says that if a
property holds of 0 and is closed under successor, then it holds of N.
I identify "properties" with subobjects here.  In terms of
homomorphisms, this is equivalent to the "uniqueness part" of
initiality (assuming that our category has some epi-mono factorization
property, I suppose).

One may question whether it's appropriate to take the principle of
induction to apply to all properties or only those properties which
are definable in some sense -- I guess you've done as much when you
refer to the "higher-order statement" above.  I'm not sure to what
extent my use of all subalgebras differs from your quantification over
all g and c in the statement of recursion above, however.

When we separate the properties of induction and recursion in the way
I mention above, by the way, then (conceptually) induction is a
property of those algebras A in which each element is picked out by a
(not necessarily unique) term of our language.  I.e., those A for
which there is an epi from the initial algebra to A.  Hence, induction
in my sense is weaker than recursion (induction in your sense).

Regarding whether a statement about quotients is really the
"friendliest form" of coinduction, I'll have to agree that it is not,
if one's aim is to describe coinduction as it is commonly used.  I
chose to describe it in the terms above because these terms make clear
the "co" in "coinduction".

I did gloss over details that ensure the equivalence of the "familiar
principle" of coinduction and the principle regarding quotients.  If
the base category has kernel pairs of coequalizers and G preserves
weak pullbacks, then a G-coalgebra A has no non-trivial quotients iff the
equality relation is the largest coalgebraic relation on A (I hope
I've got that right).  Inasmuch as the principle I called coinduction
is not always equivalent to the familiar principle, I was being a bit
misleading.  However, I think it is the right approach if one's goal
is to explain the relationship between induction and coinduction.

I'm not sure that the principle regarding the equality relation is
what you'd call coinduction, however.  I think that the principle you
have in mind is what I'd call "corecursion".  Analogous to the
distinction between recursion and induction, I omit the existence part
of finality in what I call "coinduction" and I use the uniqueness
part.
 
> 
> and then, moving from coalgebraic theories to comonads, like
> from algebraic theories to monads, should presumably yield an
> HSP-like characterization for the categories of (comonad)
> coalgebras. but coalgebras for a comonad over a topos usually
> form just another topos.
> 

But, there *are* such characterizations of categories of coalgebras,
if I understand your point.  Namely, a full subcategory of coalgebras
is closed under coproducts, codomains of epis and domains of regular
monos iff it is the class of coalgebras satisfying a "coequation" --
given, here, some boundedness conditions on the functor so that things
work out well.  For brevity's sake, and because you're likely familiar
with this result, I'll omit details on what a coequation is and how
it's satisfied.

By the way, about that last statement regarding coalgebras "usually"
forming a topos:  Doesn't one need that the endofunctor preserve
pullbacks, or am I missing something?

-- 
Jesse Hughes
"The only "intuitive" interface is the nipple.  After that, it's all
learned."                             -Bruce Ediger on X interfaces
"Nipples are intuitive, my ass!"      -Quincy Prescott Hughes



