From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 4 Oct 2006 08:07:09 +1000
From: Andy Newman
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] gs
Message-ID: <200610032207.k93M7BYV014543@haides.silverbrookresearch.com>
References: <8fd802a4a0bcc60a27f43346bbccf0c5@plan9.bell-labs.com> <6e35c0620610031348j7bb69409qc12082341f9d23e2@mail.gmail.com> <283f5df10610031437p7d1d35b5qca6c81f18faa2e96@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <283f5df10610031437p7d1d35b5qca6c81f18faa2e96@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
Topicbox-Message-UUID: c4c00afa-ead1-11e9-9d60-3106f5b1d025

LiteStar numnums wrote:
> I should think that the real reason that PostScript is so slow is
> that the graphics rely upon someone who should have really optimised
> some of the workings out of their source file...

In a previous life I worked on graphics systems similar in nature to
Postscript (language + graphics lib). Profiling such things, including
older versions of gs, shows a general pattern: around 50% of the time
in an average render (for some value of average) is spent writing runs
of pixels to the output image. The memory ops hurt a lot. When
accelerating these things (i.e. throwing h/w at them), typically the
first thing you want to do is have the h/w generate pixel runs from
some run-length representation used in the s/w. Increased device
spatial and color resolutions just make this more true, with the
associated increase in memory requirements for images.

Modern CPU/memory characteristics may have altered that 50% figure
(it's been a while since I measured it), but I don't see it being
vastly different (famous last words :). The nature of image rendering,
and of RIPs in general, places high demands on memory systems, and it
is easy to NOT get good rendering throughput solely due to
memory-related issues.

Also, the language interpreter itself can be a bottleneck if not
optimized for handling large inputs. The programs these interpreters
generally process are very large but relatively simple: few loops,
some subroutines, but lots and lots (many millions if not tens of
millions) of lexemes to read.
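
As a rough illustration of why the pixel-writing step dominates, here
is a minimal C sketch of the kind of run-length (span) representation
described above and the loop that expands it into raster memory. The
type names, the 8-bit grayscale raster, and the function are assumptions
made up for illustration; they are not taken from gs or any particular
RIP.

    #include <stdint.h>
    #include <string.h>

    /* One horizontal run of identical pixels on a scanline
       (illustrative layout, not from gs). */
    typedef struct {
        int     y;      /* scanline index                   */
        int     x0;     /* first pixel covered by the run   */
        int     len;    /* number of pixels in the run      */
        uint8_t value;  /* pixel value (8-bit gray assumed) */
    } Run;

    /* A simple 8-bit grayscale raster. */
    typedef struct {
        int      width;
        int      height;
        uint8_t *pixels;    /* width*height bytes, row-major */
    } Raster;

    /*
     * Expand run-length spans into the output raster.
     * In s/w this memset-driven inner loop is where much of the
     * render time can go; h/w acceleration would typically consume
     * the Run records directly and generate the pixels itself.
     */
    void
    expandruns(Raster *r, const Run *runs, int nruns)
    {
        int i;

        for (i = 0; i < nruns; i++) {
            const Run *run = &runs[i];
            int x0, x1;

            if (run->y < 0 || run->y >= r->height)
                continue;
            x0 = run->x0 < 0 ? 0 : run->x0;
            x1 = run->x0 + run->len;
            if (x1 > r->width)
                x1 = r->width;
            if (x1 <= x0)
                continue;
            memset(r->pixels + (size_t)run->y * r->width + x0,
                   run->value, (size_t)(x1 - x0));
        }
    }

Each Run record is only a handful of bytes, but the memset it drives
can touch hundreds or thousands of bytes of image memory, which is why
handing the run stream to hardware (and letting it generate the pixels)
tends to pay off so quickly, and why higher resolutions only make the
imbalance worse.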