This is very exciting! Around the time when Thierry sent his message, I did a little experiment where I made comp compute to a value in cubicaltt (this means that it is not defined by cases on the type as in the CCHM paper): https://github.com/mortberg/cubicaltt/tree/compval

What I then realized was that the full proof of univalence (using that unglue is an equivalence) goes through as it is: https://github.com/mortberg/cubicaltt/blob/compval/examples/compval.ctt#L330

The same holds for the proof of univalence using Id everywhere. This was a bit surprising to me at first, as it shows that none of the computation rules for comp by cases on the type matter in these proofs (note that we still have the equations for how "comp^i A [phi -> u] u0" computes under phi : FF). However, some proofs did of course break, for example the direct proof of "uabeta", which relies on how comp in Glue computes in the CCHM cubical set model.

So it seems to me that the system Thierry describes in his message has many very good properties that have so far been missing from, or are unknown for, the proposed type theories for univalent type theory; in particular:

- It has a model in Kan simplicial sets and a constructive model in ("Dedekind"/distributive lattice) cubical sets.
- It satisfies homotopy canonicity, so we know that all terms of type nat "compute" to the same numerals in these models.
- It has Id types (with strict computation for J), and univalence is a theorem for these types.
- As comp doesn't compute, the implementation of the type theory is much simpler and type checking is very fast compared to regular cubicaltt.

The only downside is that it doesn't have "strict" canonicity (not all closed terms of type nat are judgmentally equal to a numeral), but I don't think that this is too much of a problem in practice, as we can implement a closed-term evaluator inspired by the cubical set model for computing these numerals (much like how Coq has vm/native_compute for efficient computation of closed terms). Finally, my little experiment indicates that having a comp which doesn't compute doesn't affect the practical usability of this system too much either.

--
Anders

PS: Note that I didn't remove reversals in my small experiment, as it would require a bit more hacking and I didn't have time to do it, but there is no fundamental problem in doing this, and from my experience with cartesian cubical type theory it is not too bad in practice either (we would drop reversals and instead have comp 0->1 and comp 1->0).

On Monday, October 1, 2018 at 9:02:53 AM UTC-4, coquand wrote:

> Following the previous note on semantics of HITs in the simplicial set model, I looked a little more at the formal system which is similar to cubical type theory (based on distributive lattices) but where the filling operation is treated as a -constant- (without any computation rule).
>
> This formal system has models in simplicial and cubical sets.
>
> It is possible to show for this system a canonicity theorem, similar to Voevodsky's conjecture: a closed term of type Bool is -path equal- to 0 or 1. (With a similar statement for natural numbers.)
>
> A corollary of this result is that the value of such a closed term is the same in all models. Hence we know that the computation of such a closed term in various extensions where we can compute the filling operation will give the same value (and the same value as we get in simplicial sets; so e.g. the Brunerie constant has to be +/-2 in the model based on de Morgan algebras).
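Spelled out, the canonicity statement above says roughly the following (a sketch in Path-type notation; the nat case is analogous, with numerals in place of 0 and 1):

% Homotopy canonicity (sketch): every closed term of type Bool is path-equal
% to one of the two numerals 0 and 1.
\[
  \vdash t : \mathsf{Bool}
  \quad\Longrightarrow\quad
  \vdash p : \mathsf{Path}\ \mathsf{Bool}\ t\ 0
  \ \text{ or }\
  \vdash p : \mathsf{Path}\ \mathsf{Bool}\ t\ 1
  \qquad \text{for some closed term } p.
\]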
> This is proved using the sconing technique as described in https://golem.ph.utexas.edu/category/2013/04/scones_logical_relations_and_p.html
>
> As is stated there, the problem with using this technique to solve Voevodsky's conjecture is that there is no apparent way to associate (in a "strict" way) a simplicial/cubical set to a closed type in dependent type theory with identity types.
>
> This issue disappears here thanks to the use of higher dimensional syntax.
>
> I wrote a proof which is the result of several discussions with Simon Huber and Christian Sattler on this topic. The proof can be done effectively using the cubical set model, or non-effectively using the simplicial set model.
>
> Maybe a similar result can be proved for simplicial/cubical tribes.
>
> Thierry
>
> On 14 Sep 2018, at 16:16, Andrew Swan wrote:
>
> Thanks for writing out the note. I'll also make a few remarks about how my more recent work connects with these things.
>
> 1. The initial Susp X algebras, and I think in fact all of the initial algebras appearing in the Coquand-Huber-Mortberg paper, can be constructed by combining finite colimits and W types with reductions (as appearing in this paper). From this it follows that they exist not just in cubical sets and simplicial sets, but in any topos that satisfies WISC, and for the special case of presheaf categories with locally decidable cofibrations they can be constructed without using WISC or exact quotients. I think the latter can be viewed as a generalisation of the Coquand-Huber-Mortberg definition.
>
> 2. In my post and again in the W-types with reductions paper I suggested using Christian's generalised notion of lifting property for commutative squares. I still think this works, but I now lean towards other ways of looking at it. I didn't really emphasise this in the paper, but it follows from the general theory that one obtains not just an awfs, but a fibred awfs over the codomain fibration. This gives an awfs, and thereby a notion of trivial cofibration and fibration, for each slice category C/Y. Then, given a map f : X -> Y, we can either view it as a map in the slice category C/1 and factorise it there, or as a map into the terminal object in the slice category C/Y. The latter is necessarily stable under pullback, and I think works out the same as "freely adding a homogeneous filling operator."
>
> Alternatively, it is also possible to use W types with reductions directly, without going via the small object argument. In this case the proofs in the Coquand-Huber-Mortberg paper generalise pretty much directly with very little modification.
>
> 3. I still find the situation in simplicial sets a little strange, in particular the need to switch back and forth between the different notions of fibration, although it does work as far as I can tell.
>
> 4. I made some remarks before about universal lifting problems. These now appear in the paper Lifting Problems in Grothendieck Fibrations.
>
> Best,
> Andrew
>
> On Friday, 14 September 2018 13:15:58 UTC+2, coquand wrote:
>
>> I wrote a short note to confirm Andrew's message that indeed this technique works as well (using classical logic) for simplicial sets. This can now be presented as a combination of various published results. (The originality is only in the presentation; this essentially follows what is suggested in Andrew's message.)
>> This provides a semantics of e.g. suspension as an operation U -> U, where U is a univalent universe in the simplicial set model.
>>
>> Thierry
>>
>> On 7 Jun 2017, at 14:34, Andrew Swan wrote:
>>
>>> So suspension (or more generally pushouts/coequalisers) is what would make a really good test case for any proposed general approach — it's the simplest HIT which as far as I know hasn't been modelled without a size blowup in any infinite-dimensional model except cubical sets, under any of the approaches to modelling HITs proposed so far. (Am I right in remembering that this has been given for cubical sets? I can't find it in any of the writeups, but I seem to recall hearing it presented at conferences.)
>>
>> The technique used in cubical type theory seems fairly flexible. I'm not sure exactly how flexible, but I think I can get suspension to work in simplicial sets. Throughout the below, I use the characterisation of fibrations as maps with the rlp against the pushout product of each monomorphism with an endpoint inclusion into the interval. WLOG there is also a uniformity condition: we have a choice of lift, and "pulling back the monomorphism preserves the lift."
>>
>> Given a fibration X -> Y, you first freely add elements N and S together with a path from N to S for each element of X (I think this is the same as what you called the pre-suspension). Although the pre-suspension is not a fibration in general, it does have some of the properties you might expect from a fibration. Given a path in Y and an element in the fibre of an endpoint, one can transport along the path to get something in the fibre of the other endpoint. There should also be a "flattening" operation that takes a path q in presuspension(X) over p in Y, and returns a path from q(1) to the transport along p of q(0) that lies entirely in the fibre of p(1).
>>
>> You then take the "weak fibrant replacement" of the pre-suspension. A map in simplicial sets is a fibration if and only if it has the rlp against each pushout product of a monomorphism with an endpoint inclusion into the interval. In fibrant replacement you freely add a diagonal lift for each such lifting problem. In weak fibrant replacement you only add fillers for some of these lifting problems. The pushout product of a monomorphism A -> B with an endpoint inclusion always has codomain B x I; we then only consider those lifting problems where the bottom map factors through the projection B x I -> B.
>>
>> I think there are two ways to ensure that the operation of weak fibrant replacement is stable under pullback: one way is to carry out the operation "internally" in simplicial sets (viewed as a topos), and the other is to use the algebraic small object argument, ensuring that the uniformity condition above is part of the definition. The intuitive reason why this should be stable is that what stops the usual fibrant replacement from being stable is that, e.g., when we freely add the transport of a point along a path p, we are adding a new element to the fibre of p(1) which depends on things outside of that fibre, whereas with weak fibrant replacement we only add a filler for an open box in a certain fibre if the original open box lies entirely in that fibre.
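In symbols, the restricted lifting problems just described are against maps of the following shape (a sketch; here m : A -> B is a monomorphism, I is the interval, e an endpoint of I, and π_B denotes the projection B x I -> B):

% Generating maps: the pushout product of a mono m : A -> B with an endpoint
% inclusion {e} -> I, which always has codomain B x I.
\[
  m \mathbin{\hat{\times}} (\{e\} \hookrightarrow I) \;:\;
  (A \times I) \cup_{A \times \{e\}} (B \times \{e\})
  \;\longrightarrow\; B \times I ,
  \qquad e \in \{0,1\}.
\]
% Weak fibrant replacement adds diagonal lifts only for those lifting problems
% against the map being replaced whose bottom map B x I -> Y factors as:
\[
  B \times I \xrightarrow{\;\pi_B\;} B \longrightarrow Y .
\]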
>> In order to show that the suspension is fibrant one has to use both the structure already present in the pre-suspension (transport and flattening) and the additional structure added by weak fibrant replacement. The idea is to follow the same proof as for cubical type theory. It is enough to just show composition and then derive filling. So to define the composition of an open box, first flatten it, then use the weak fibration structure to find the composition. (And I think that last part should be an instance of a general result along the lines of "if the monad of transport and flattening distributes over a monad, then the fibrant replacement monad distributes over the coproduct of that monad with weak fibrant replacement").
>>
>> Best,
>> Andrew
>>
>> On Wednesday, 7 June 2017 11:40:12 UTC+2, Peter LeFanu Lumsdaine wrote:
>>
>>> On Thu, Jun 1, 2017 at 6:08 PM, Peter LeFanu Lumsdaine <p.l.lu...@gmail.com> wrote:
>>>
>>>> On Thu, Jun 1, 2017 at 6:56 PM, Steve Awodey wrote:
>>>>
>>>>> you mean the propositional truncation or suspension operations might lead to cardinals outside of a Grothendieck Universe?
>>>>
>>>> Exactly, yes. There's no reason I know of to think they *need* to, but with the construction of Mike's and my paper, they do. And adding stronger conditions on the cardinal used won't help. The problem is that one takes a fibrant replacement to go from the "pre-suspension" to the suspension (more precisely: a (TC,F) factorisation, to go from the universal family of pre-suspensions to the universal family of suspensions); and fibrant replacement blows up the fibers to be the size of the *base* of the family. So the pre-suspension is small, but the suspension — although essentially small — ends up as large as the universe one's using.
>>>
>>> I realise I was a bit unclear here: it's only suspension that I meant to suggest is problematic, not propositional truncation. The latter seems a bit easier to do by ad hoc constructions; e.g. the construction below does it in simplicial sets, and I think a similar thing may work also in cubical sets. (I don't claim originality for this construction; I don't think I learned it from anywhere, but I do recall discussing it with people who were already aware of it or something similar (I think at least Mike, Thierry, and Simon Huber, at various times?), so I think multiple people may have noticed it independently.)
>>>
>>> So suspension (or more generally pushouts/coequalisers) is what would make a really good test case for any proposed general approach — it's the simplest HIT which as far as I know hasn't been modelled without a size blowup in any infinite-dimensional model except cubical sets, under any of the approaches to modelling HITs proposed so far. (Am I right in remembering that this has been given for cubical sets? I can't find it in any of the writeups, but I seem to recall hearing it presented at conferences.)
>>>
>>> Construction of propositional truncation without size blowup in simplicial sets:
>>>
>>> (1) Given a fibration Y —> X, define |Y| —> X as follows:
>>>
>>> an element of |Y|_n consists of an n-simplex x : Δ[n] —> X, together with a "partial lift of x into Y, defined at least on all vertices", i.e. a subpresheaf S ≤ Δ[n] containing all vertices, and a map y : S —> Y such that the evident square commutes;
>>>
>>> reindexing acts by taking pullbacks/inverse images of the domain of the partial lift (i.e. the usual composition of a partial map with a total map).
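In symbols, the definition in (1) reads roughly as follows (a sketch; p : Y —> X names the given fibration and α : Δ[m] —> Δ[n] a simplicial operator, neither of which is named in the original message):

% Degree-n elements of |Y|: an n-simplex of X together with a partial lift
% into Y defined on a subpresheaf containing all vertices.
\[
  |Y|_n \;=\; \bigl\{\, (x, S, y) \;\bigm|\;
    x : \Delta[n] \to X,\ \
    S \le \Delta[n] \text{ containing all vertices},\ \
    y : S \to Y,\ \
    p \circ y = x|_{S} \,\bigr\},
\]
% Reindexing pulls back (takes inverse images of) the domain of the partial lift.
\[
  \alpha^{*}(x, S, y) \;=\; \bigl(x \circ \alpha,\ \alpha^{-1}(S),\ y \circ \alpha|_{\alpha^{-1}(S)}\bigr).
\]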
>>> (2) There's an evident map Y —> |Y| over X; and the operation sending Y to Y —> |Y| —> X is (coherently) stable up to isomorphism under pullback in X. (Straightforward.)
>>>
>>> (3) In general, a fibration is a proposition in the type-theoretic sense iff it's orthogonal to the boundary inclusions δ[n] —> Δ[n] for all n > 0. (Non-trivial but not too hard to check.)
>>>
>>> (4) The map |Y| —> X is a fibration, and a proposition. (Straightforward, given (3), by concretely constructing the required liftings.)
>>>
>>> (5) The evident map Y —> |Y| over X is a cell complex constructed from boundary inclusions δ[n] —> Δ[n] with n > 0.
>>>
>>> To see this: take the filtration of |Y| by subobjects Y_n, where the non-degenerate simplices of Y_n are those whose "missing" simplices are all of dimension ≤ n. Then Y_0 = Y, and the non-degenerate simplices of Y_{n+1} that are not in Y_n are all (n+1)-cells with boundary in Y_n, so the inclusion Y_n —> Y_{n+1} may be seen as gluing on many copies of δ[n+1] —> Δ[n+1].
>>>
>>> (6) The map Y —> |Y| is orthogonal to all propositional fibrations, stably in X. (Orthogonality is immediate from (3) and (5); stability is then by (2).)
>>>
>>> (7) Let V be either the universe of "well-ordered α-small fibrations", or the universe of "displayed α-small fibrations", for α any infinite regular cardinal. Then V carries an operation representing the construction of (1), and modelling propositional truncation. (Lengthy to spell out in full, but straightforward given (2), (6).)
>>>
>>> –p.
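Unpacking (3) and (6) a little (a sketch): orthogonality of Y —> |Y| against a propositional fibration P —> X means that every commuting square of the following shape has a (unique) diagonal filler, which is the elimination of the truncation into propositions over X.

% P —> X is a propositional fibration, i.e. by (3) right orthogonal to the
% boundary inclusions δ[n] —> Δ[n] for n > 0; the diagonal filler is the
% arrow |Y| —> P, and by (2) the construction is stable in X.
\[
  \begin{array}{ccc}
    Y & \longrightarrow & P \\
    \downarrow & \nearrow & \downarrow \\
    |Y| & \longrightarrow & X
  \end{array}
\]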