Date: Tue, 20 Oct 2009 03:34:56 +1100
From: Sam Watkins
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] Barrelfish
Message-ID: <20091019163456.GF13857@nipl.net>
References: <20091019155738.GB13857@nipl.net>

On Mon, Oct 19, 2009 at 12:05:19PM -0400, erik quanstrom wrote:
> > Details of the calculation: 7200 seconds * 30fps * 12*16 (50*50 pixel
> > chunks) * 500000 elementary arithmetic/logical operations in a pipeline
> > (unrolled). 7200*30*12*16*500000 = 20 trillion (20,000,000,000,000)
> > processing units. This is only a very rough estimate and does not
> > consider all the issues.
>
> could you do a similar calculation for the memory bandwidth required to
> deliver said instructions to the processors?

The "processors" (actually smaller processing units) would mostly be
configured at load time, much like an FPGA.  Most units would execute a
single simple operation repeatedly on a stream of data; they would not
fetch and execute instructions sequentially like a normal CPU.

The data would travel through the system step by step; it would mostly
not need to be stored in RAM.  If some RAM were needed, it would be
small amounts on chip, at appropriate places in the pipeline.

Some programs (not so much video encoding, I think) do need a lot of RAM
for intermediate calculations, or I/O, for example to fetch stuff from a
database.  Such systems can also be designed as networks of simple
processing units connected by data streams / pipelines; a rough sketch
follows below.

Sam
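
A rough sketch of the "network of simple units connected by streams"
idea, using Go channels to stand in for the on-chip data paths.  The
unit() helper and the three example stages are only illustrative; a
real FPGA-style array would not run as goroutines, but the dataflow
shape is the same: each unit is configured once with a single fixed
operation and data streams through without being staged in main memory.

    package main

    import "fmt"

    // unit is one configured processing element: it applies a single
    // fixed operation to every value that streams through it.
    func unit(op func(int) int, in <-chan int) <-chan int {
            out := make(chan int)
            go func() {
                    for v := range in {
                            out <- op(v)
                    }
                    close(out)
            }()
            return out
    }

    func main() {
            // Source stream: stand-in for pixel data entering the array.
            src := make(chan int)
            go func() {
                    for i := 0; i < 8; i++ {
                            src <- i
                    }
                    close(src)
            }()

            // "Load-time" configuration of a small pipeline: each stage
            // does one elementary operation, and stages are chained by
            // streams rather than by a fetch-decode-execute loop.
            s1 := unit(func(v int) int { return v * 3 }, src)
            s2 := unit(func(v int) int { return v + 1 }, s1)
            s3 := unit(func(v int) int { return v & 0xff }, s2)

            // Sink: drain the final stream.
            for v := range s3 {
                    fmt.Println(v)
            }
    }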