Could you explain why you want to use normal forms and hereditary substitution--rather than more typical typing rules--if you expect to need an infinite tower of coherences anyway? The coherence tower seems like it will make the definition dramatically more complicated, so wouldn't it be best to base it on the simplest and most familiar typing rules?

Also, mightn't there be some clever way to "automatically" specify all the coherences by referring to the strict equational theory? From what little I have read about higher operads, it seems like that is what they do. If you need an infinite set of constructors, then it seems you need some strict "base language" to formalize the pattern, and that base language should be free to use clever, indirect definitions.

On Monday, October 16, 2017 at 1:02:02 PM UTC-4, Michael Shulman wrote:
On Mon, Oct 16, 2017 at 8:00 AM, Michael Shulman <shu...@sandiego.edu> wrote:
>> 1. Give bidirectional typing rules to ensure only beta-normal, eta-long
>>    terms are typeable.
>> 2. Hence, a conversion rule can be omitted, since all terms (including
>>    types) are in normal form.
>> 3. Prove a bunch of lemmas, eventually culminating in proofs of
>>    (a) hereditary substitution and (b) identity expansion. (This
>>    basically ends up making normalization part of the definition of
>>    substitution.)
>
> Yes, that's what I'm proposing.
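
To fix ideas, here is how 3(a) plays out in the simplest possible
case: a minimal Haskell sketch of hereditary substitution for the
simply typed lambda calculus, with the Nf/Ne split standing in for
the bidirectional discipline of 1-2.  (Eta-longness and
capture-avoidance are ignored, all names are invented for
illustration, and the dependent case is of course where the real
difficulty lives.)

data Ty = Base | Arr Ty Ty deriving (Eq, Show)

data Nf = Lam String Nf | Neu Ne   -- normal: lambdas over neutrals
  deriving Show
data Ne = Var String | App Ne Nf   -- neutral: a variable and its spine
  deriving Show

-- Substitute the normal term s (of type a) for x in a normal term,
-- reducing on the fly so that the result is again normal.
substNf :: Nf -> Ty -> String -> Nf -> Nf
substNf s a x (Lam y b)
  | y == x    = Lam y b                    -- x is shadowed
  | otherwise = Lam y (substNf s a x b)
substNf s a x (Neu n) =
  case substNe s a x n of
    Left (nf, _) -> nf                     -- head was x: redex already reduced
    Right n'     -> Neu n'                 -- head untouched: still neutral

-- Substituting into a neutral either keeps it neutral or, when its
-- head is x, collapses it to a normal form (paired with its type).
substNe :: Nf -> Ty -> String -> Ne -> Either (Nf, Ty) Ne
substNe s a x (Var y)
  | y == x    = Left (s, a)
  | otherwise = Right (Var y)
substNe s a x (App n m) =
  let m' = substNf s a x m
  in case substNe s a x n of
       Right n'          -> Right (App n' m')
       Left (f, Arr b c) -> Left (applyNf b f m', c)
       Left (f, t)       -> Left (f, t)    -- ill-typed input; unreachable

-- Apply a normal function (domain type b) to a normal argument; the
-- recursive call to substNf is at the smaller type b.
applyNf :: Ty -> Nf -> Nf -> Nf
applyNf b (Lam y body) arg = substNf arg b y body
applyNf _ (Neu n)      arg = Neu (App n arg)

These functions are not structurally recursive; they terminate by a
lexicographic measure on (type, term), and packaging that argument
up together with typing is essentially what the hereditary
substitution lemma in 3(a) has to deliver.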

Let me expand on that a little under a new subject line, since I've
played around with it some already and I think I have a fairly clear
picture of how it should go.

In a dependently typed bidirectional system, hereditary substitution
needs to be mutually defined along with the typing judgments.  In
principle it might be possible to do this inductive-recursively, but I
would want to instead define the graph of hereditary substitution
inductive-inductively and then afterwards prove it to be a total
function, since inductive-inductive types are easier to make sense of
homotopically and generally (in my experience) have a more useful
induction principle.  (To reply to Matt, I am not considering any kind
of impredicativity; the axiomatic version could be added later.)
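
To be concrete about that move (graph first, totality afterwards),
here is a toy Lean sketch with ordinary addition standing in for
hereditary substitution; the real definition is typed and
inductive-inductive, but the pattern is the same.

-- The graph of a function as an inductive relation, with totality
-- proved after the fact rather than built into the definition.
inductive AddGraph : Nat → Nat → Nat → Prop where
  | zero (n : Nat) : AddGraph 0 n n
  | succ (m n k : Nat) : AddGraph m n k → AddGraph (m + 1) n (k + 1)

theorem addGraph_total (m n : Nat) : ∃ k, AddGraph m n k := by
  induction m with
  | zero => exact ⟨n, AddGraph.zero n⟩
  | succ m ih =>
    cases ih with
    | intro k h => exact ⟨k + 1, AddGraph.succ m n k h⟩

Functionality (uniqueness of the output) goes by a similar induction,
and the two together recover the function from its graph.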

However, when doing this intrinsically (which is the direct way to get
a useful induction principle) and without h-level restriction, I found
that there seems to be a new problem: you end up needing to substitute
hereditarily not just one term but a whole context morphism, and then
in defining the clauses for the graph of that hereditary substitution
you need to know already that it is associative.  So you add another
judgment for the graph of its associativity, but in defining its
clauses you need to know that the associativity is coherent, and so
on.  The result is an inductive-inductive definition with infinite
dependency, which we don't know how to make sense of internally.  But
if we can make sense of it (or make it finite by capping the h-level
somewhere), then this tower of dependent inductive-inductive types
ought to present the entire (semi)simplicial nerve of the syntactic
category of the type theory (probably in the form of something like a
"comprehension quasicategory").

The induction principle of this inductive-inductive definition should
then let us interpret it into an arbitrary sufficiently structured
quasicategory, or even an internally defined "complete Segal space" if
we do it in a homotopy type theory, such as the canonical universe,
thereby solving the autophagy problem.  On the other hand, since it
has no path-constructors, an encode-decode argument should prove that
it actually consists of h-sets, which should then allow us to
interpret untyped syntax into it.

Of course this is a VERY rough sketch and there are a lot of ways it
could go wrong, plenty of which I expect people will point out.  (-:

Mike