From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <105369E7-027D-4253-93A2-8364084923CA@gmail.com>
From: Patrick Kelly
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
In-Reply-To:
Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes
Content-Transfer-Encoding: 7bit
Mime-Version: 1.0 (iPod Mail 7D11)
Date: Wed, 20 Jan 2010 16:41:54 -0500
References:
Subject: Re: [9fans] dataflow programming from shell interpreter
Topicbox-Message-UUID: c1d4ed7a-ead5-11e9-9d60-3106f5b1d025

On Jan 20, 2010, at 4:13 PM, Eris Discordia wrote:

> Aren't DirectShow filter graphs and programs like GraphStudio/
> GraphEdit one possible answer to the video processing question?
> Filter graphs can be generated by any program, GUI or CLI, and fed
> to DirectShow, provided one learns the ins and outs of generating them.
>
> The OP's question, too, finds one answer in MS PowerShell, where
> instead of byte streams, .NET objects are passed between the various
> tools and a C#-like shell language is used for manipulating them.
> .NET objects can at any point be serialized/deserialized to/from XML
> using stock classes and routines in the System.Xml.Serialization
> namespace.

Why XML? Surely there are better options.

>
> Just a note that at least some implementations of both ideas exist
> in production settings.
>
>
> --On Tuesday, January 19, 2010 15:40 +0000 Steve Simon wrote:
>
>>> The PBM utilities (now netpbm) did something similar for bitmaps.
>>> I think V10 also had some pipeline utils for manipulating images.
>>
>> Indeed, however I make a firm distinction between image
>> processing (2d) and video processing (3d).
>>
>> In video processing the image sequences can be of arbitrary length,
>> the processing often spans several fields, and, because we want our
>> results ASAP, tools should introduce the minimum delay possible (e.g. a
>> gain control only needs a one-pixel buffer).
>>
>> Additionally, image processing pipelines often have nasty things like
>> feedback loops and the mixing of different paths with differing delays,
>> all of which need special care.
>>
>> We have a package of good old unix tools, developed jointly by us
>> and the BBC, which works as you might expect:
>>
>> cat video-stream | interpolate -x 0.7 -y 0.3 | rpnc - 0.5 '*' | display
>>
>> however this can get quite ugly when the algorithm gets complex.
>>
>> We need to cache intermediate results - processing HD (let alone 2k
>> 3d) can get time consuming, so we want an environment which tees off
>> intermediate results automagically and reuses them if possible - sort of
>> mk(1) combined with rc(1).
>>
>> It is also a pain that it's not easy to work at different scales, i.e.
>> writing expressions that operate at the pixel level while also using
>> large blocks like interpolate; rpnc is an attempt to do this, but it's
>> interpreted (slow).
>>
>> A restricted rc(1)-like language which supports pipelines and scalar
>> (configuration) variables, combined with a JIT compiler (in the vein
>> of popi), looks like a solution, but I have never got further than
>> wishful thinking.
>>
>> -Steve
>>
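
As a rough illustration of Steve's point about pipelines getting ugly once
paths with differing delays have to be mixed, something like the following
rc(1)-style sketch is usually what it degenerates into; interpolate is the
tool from his example pipeline, while delay and mix are purely hypothetical
stand-ins for whatever tools actually do those jobs:

	# split the source into two paths with different delays, then mix them;
	# a single linear pipeline cannot express this branch, so temporary
	# files (or named pipes) have to be managed by hand
	interpolate -x 0.7 -y 0.3 < video-stream > /tmp/path1
	# hypothetical: delay the second path by two fields
	delay -f 2 < video-stream > /tmp/path2
	# hypothetical: blend the two paths and view the result
	mix /tmp/path1 /tmp/path2 | display

The bookkeeping for the temporaries, and remembering which of them are still
valid, is exactly the part one would like the environment to do for you.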
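
And a minimal sketch of the "mk(1) combined with rc(1)" idea: if each
intermediate result is a target in a mkfile, mk's dependency checking gives
the caching almost for free, recomputing a stage only when its inputs have
changed. The file names here are made up; interpolate and rpnc are again the
tools from the example pipeline:

	# each stage writes its result to a file; mk rebuilds only what is stale
	interp.vid: video-stream
		interpolate -x 0.7 -y 0.3 < video-stream > interp.vid

	gain.vid: interp.vid
		rpnc - 0.5 '*' < interp.vid > gain.vid

	# virtual target: always run the viewer
	view:V: gain.vid
		display < gain.vid

Running 'mk view' after tweaking only the rpnc stage would then reuse
interp.vid instead of re-interpolating the whole sequence, which is most of
what the wished-for tee-and-reuse environment would have to do - the missing
piece being that parameter changes, not just file changes, should invalidate
a cached stage.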