To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
In-reply-to: Your message of "Thu, 16 Apr 2009 18:24:36 BST."
References: <20090406035529.C9F5A5B21@mail.bitblocks.com>
From: Bakul Shah
Date: Thu, 16 Apr 2009 11:52:59 -0700
Message-Id: <20090416185300.207AF5B1B@mail.bitblocks.com>
Subject: Re: [9fans] typed sh (was: what features would you like in a shell?)

On Thu, 16 Apr 2009 18:24:36 BST roger peppe wrote:
> 2009/4/6 Bakul Shah :
> > On Thu, 02 Apr 2009 20:28:57 BST roger peppe wrote:
> >> a pipeline is an amazingly powerful thing considering
> >> that it's not a turing-complete abstraction.
> >
> > "f | g" is basically function composition, where f and g are
> > stream functions. Of course, this simple analogy breaks down
> > the moment we add more input/output channels -- maybe that
> > is why anything beyond a simple pipeline seems to get people
> > in trouble (see the rc output redirection thread).
>
> actually, the analogy works fine if we add more
> input channels - it's multiple output channels
> that make things hard, as they mean that you have
> an arbitrary directed graph rather than a tree, which doesn't
> have such a convenient textual representation
> and is harder to comprehend to boot.

True in general, but certain graphs are relatively easy to
comprehend depending on what you are doing (trees, hub &
spokes, rings). Shells don't provide a convenient mechanism
for constructing these graphs (I'd use macros in Scheme/Lisp,
or a graphics editor). For DAGs you can use something like
the example below, but it doesn't have the nice aesthetics of
a pipeline!

    let s0,s1 = function-with-two-output-streams
    function-with-two-input-streams(f0(s0), f1(s1), ...)

> > To go beyond simple char streams, one can for example build
> > an s-expr pipeline: a stream of self-identifying objects of a
> > few types (chars, numbers, symbols, lists, vectors).
>
> the difficulty with s-exprs (and most nested structures, e.g. XML)
> from a pipeline point of view is that their nested nature means
> that any branch might contain unlimited quantities of stuff, so
> you can't always process in O(1) space, which is one of the
> things i really like about pipeline processing.

You can have arbitrarily long lines in a text file, so if you
operate on lines you need arbitrary buffer space. It is the
same problem.

Also note that I was talking about a stream of s-exprs, not
one s-expr as a stream (which makes no sense). For example,

    (attach ...) (walk ...) (open ...) (read ...) (clunk ...)

> i found a nice counter-example in the fs stuff - the fundamental type
> was based around a "conditional-push" protocol for sending trees
> of files - the sender sends some information on a file/directory
> and the receiver replies whether to descend into that file or
> not. the tree had a canonical order (alphabetical on name), so
> tree merging could be done straightforwardly in O(1) space.
>
> this kind of streaming "feels" like a regular pipeline, but you can't
> do this with a regular pipeline. for instance, a later element in the
> pipeline can prevent an earlier one from descending into a part
> of the file system that might block indefinitely.
>
> every language has a trade-off between typed and untyped representations;
> with alphabet i was trying to create something where it was *possible*
> to create new kinds of types where necessary (as in the fs example),
> but where it wouldn't be customary or necessary to do so in the
> vast majority of cases.
>
> perhaps it was folly, but i still think it was an interesting experiment,
> and i don't know of anything similar.
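
The `let s0,s1 = ...` pseudocode above only gestures at the wiring, so
here is a minimal sketch of the same ideas in Go (a notational choice
for this sketch, not anything used in the thread). A "stream function"
maps one channel to another, pipe composes them the way "f | g" does,
and split and sum are made-up stand-ins for
function-with-two-output-streams and function-with-two-input-streams,
with double as an arbitrary per-branch stage.

    package main

    import "fmt"

    // A "stream function": reads one channel of values, produces another.
    type streamFn func(<-chan int) <-chan int

    // pipe composes stream functions the way "f | g" composes filters.
    func pipe(fns ...streamFn) streamFn {
        return func(in <-chan int) <-chan int {
            for _, f := range fns {
                in = f(in)
            }
            return in
        }
    }

    // double is a made-up stage standing in for f or g.
    func double(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for v := range in {
                out <- 2 * v
            }
        }()
        return out
    }

    // split is the "function-with-two-output-streams":
    // evens go one way, odds the other.
    func split(in <-chan int) (<-chan int, <-chan int) {
        evens, odds := make(chan int), make(chan int)
        go func() {
            defer close(evens)
            defer close(odds)
            for v := range in {
                if v%2 == 0 {
                    evens <- v
                } else {
                    odds <- v
                }
            }
        }()
        return evens, odds
    }

    // sum is the "function-with-two-input-streams": drains both inputs,
    // whichever is ready, and totals them.
    func sum(a, b <-chan int) int {
        total := 0
        for a != nil || b != nil {
            select {
            case v, ok := <-a:
                if !ok {
                    a = nil
                } else {
                    total += v
                }
            case v, ok := <-b:
                if !ok {
                    b = nil
                } else {
                    total += v
                }
            }
        }
        return total
    }

    func main() {
        src := make(chan int)
        go func() {
            defer close(src)
            for i := 1; i <= 10; i++ {
                src <- i
            }
        }()
        s0, s1 := split(src) // let s0,s1 = function-with-two-output-streams
        fmt.Println(sum(double(s0), pipe(double, double)(s1)))
    }

The select in sum is what keeps this DAG deadlock-free over unbuffered
channels; a shell gives you "|" for the linear case but no syntax for
this kind of wiring, which is the point being made above.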
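
A stream of self-identifying objects, as in the (attach ...) (walk ...)
example above, can be modelled as a sequence of tagged records that a
consumer handles one at a time. The Go sketch below is only an
illustration of that "one bounded record at a time" point; the types
Sexp, Symbol, Number and List and the request names are invented here,
not alphabet's or 9P's actual representation.

    package main

    import "fmt"

    // Sexp is one self-identifying record in the stream: a symbol, a
    // number, or a list of Sexps. (Chars and vectors omitted for brevity.)
    type Sexp interface{ isSexp() }

    type Symbol string
    type Number float64
    type List []Sexp

    func (Symbol) isSexp() {}
    func (Number) isSexp() {}
    func (List) isSexp()   {}

    // handle processes one record at a time; the stream as a whole may be
    // unbounded, but only the current record has to be in memory -- the
    // same situation as a line-oriented filter holding one line.
    func handle(rec Sexp) {
        switch r := rec.(type) {
        case List:
            if len(r) > 0 {
                if op, ok := r[0].(Symbol); ok {
                    fmt.Println("request:", op) // e.g. attach, walk, open, clunk
                    return
                }
            }
            fmt.Println("list of", len(r), "elements")
        case Symbol:
            fmt.Println("symbol:", r)
        case Number:
            fmt.Println("number:", r)
        }
    }

    func main() {
        // A made-up stream echoing the (attach ...) (walk ...) example;
        // a real reader would parse these records off a pipe one by one.
        stream := []Sexp{
            List{Symbol("attach"), Symbol("bob")},
            List{Symbol("walk"), Symbol("/usr/bob")},
            List{Symbol("open"), Number(0)},
            List{Symbol("clunk")},
        }
        for _, rec := range stream {
            handle(rec)
        }
    }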
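
The conditional-push protocol quoted above can be sketched as a pair of
channels: the sender pushes one entry per file or directory, children in
alphabetical order, and blocks until the receiver says whether to
descend. The Go sketch below is a reconstruction from that description,
not the actual fs/alphabet code; node, Entry, walk and the sample tree
are all made up.

    package main

    import (
        "fmt"
        "path"
        "sort"
    )

    // Entry is what the sender pushes for each file or directory it reaches.
    type Entry struct {
        Path  string
        IsDir bool
    }

    // node is a toy in-memory tree standing in for a real file system.
    type node struct {
        name     string
        children map[string]*node // nil for plain files
    }

    // walk pushes an entry for n and, if it is a directory, waits for the
    // receiver to say whether to descend. Children are visited in
    // alphabetical order -- the canonical order that lets two such streams
    // be merged in O(1) space.
    func walk(n *node, dir string, out chan<- Entry, descend <-chan bool) {
        p := path.Join(dir, n.name)
        e := Entry{Path: p, IsDir: n.children != nil}
        out <- e
        if !e.IsDir || !<-descend {
            return
        }
        names := make([]string, 0, len(n.children))
        for name := range n.children {
            names = append(names, name)
        }
        sort.Strings(names)
        for _, name := range names {
            walk(n.children[name], p, out, descend)
        }
    }

    func main() {
        root := &node{name: "/", children: map[string]*node{
            "bin": {name: "bin", children: map[string]*node{"rc": {name: "rc"}}},
            "tmp": {name: "tmp", children: map[string]*node{"junk": {name: "junk"}}},
            "usr": {name: "usr", children: map[string]*node{"bob": {name: "bob"}}},
        }}

        out := make(chan Entry)
        descend := make(chan bool)
        go func() {
            walk(root, "", out, descend)
            close(out)
        }()

        // The receiver prints every entry it is offered but refuses to
        // descend into /tmp, so the sender never touches that subtree.
        for e := range out {
            fmt.Println(e.Path)
            if e.IsDir {
                descend <- e.Path != "/tmp"
            }
        }
    }

Because the receiver can answer false, a later element really can stop
an earlier one from ever descending into a part of the tree, which is
the property the quoted paragraph points out.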