From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 17 Oct 2009 19:45:40 +0100
From: Eris Discordia
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID:
In-Reply-To: <4030fb6ae37f8ca8ae9c43ceefbdf57b@ladd.quanstro.net>
References: <20091015105328.GA18947@nipl.net>
 <4030fb6ae37f8ca8ae9c43ceefbdf57b@ladd.quanstro.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Subject: Re: [9fans] Barrelfish
Topicbox-Message-UUID: 896fb0b4-ead5-11e9-9d60-3106f5b1d025

>> There is a vast range of applications that cannot
>> be managed in real time using existing single-core technology.
>
> please name one.

I'm a tiny fish, this is the ocean. Nevertheless, I venture: there are
already Cell-based expansion cards out there for "real-time"
H.264/VC-1/MPEG-4 AVC encoding. Meaning, 1080p video in, H.264 stream
out, "real-time." I can imagine a large market for this in the
broadcasting, netcasting, and simulcasting industries. Simulcasting in
particular is a prime application. Station X in Japan broadcasts a
popular animated series in 1080i, while the US licensor of the same
content simulcasts it for subscribers through its web interface. This
applies all the more to live feeds.

What seems to go ignored here is the class of embarrassingly parallel
problems which--while they may or may not be important to CS people, I
don't know--appear in many areas of applied computing. I know one
person working at an institute of the Max Planck Society who regularly
runs a few hundred instances of the same program (doing some sort of
matrix calculation for a problem in physics) with different input. He
certainly could benefit from a hundred cores inside his desktop
computing platform _if_ fitting that many cores in there wouldn't cause
latencies larger than the network latencies he currently experiences
(at the moment he uses a job manager that controls a cluster). "INB4"
criticism: his input matrices are small and his work is
compute-intensive rather than memory-intensive. (A sketch of this kind
of fan-out follows at the end of this message.)

Another embarrassingly parallel problem, as Sam Watkins pointed out,
arises in digital audio processing. To his example of applying a filter
to sections of one track, I might add the example of applying the same
or different filters to multiple tracks at once. Multitrack editing
was, and is, a killer application of digital audio. Multitrack video
editing, too. I believe video/audio processing programs were among the
first applications for "workstation"-class desktops to be parallelized.
(See the second sketch below.)

By the way, I learnt about embarrassingly parallel problems from that
same Max Planck research fellow who runs embarrassingly parallel matrix
calculations.

--On Thursday, October 15, 2009 09:27 -0400 erik quanstrom wrote:

> On Thu Oct 15 06:55:24 EDT 2009, sam@nipl.net wrote:
>> task. With respect to Ken, Bill Gates said something along the lines
>> of "who would need more than 640K?".
>
> on the other hand, there were lots of people using computers with 4mb
> of memory when bill gates said this. it was quite easy to see how to
> use more than 1mb at the time. in fact, i believe i used an apple ][
> around that time that had ~744k. it was a weird amount of memory.
>
>> There is a vast range of applications that cannot
>> be managed in real time using existing single-core technology.
>
> please name one.
>
> - erik
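
P.S. To make the fan-out concrete: a minimal sketch of running many
independent instances of the same program, one per input, the way a
many-core desktop could absorb such a batch. It is POSIX fork/exec
rather than Plan 9's rfork so it stays self-contained, and the worker
"./solve", the job count, and the matrixN.in naming are all invented
for illustration:

/* fan out NJOBS independent runs of the same program, one per
 * input file; the jobs share nothing, so they need no coordination
 * beyond wait() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int
main(void)
{
	enum { NJOBS = 8 };	/* pretend each lands on its own core */
	char in[NJOBS][32];
	int i;

	for(i = 0; i < NJOBS; i++){
		snprintf(in[i], sizeof in[i], "matrix%d.in", i);
		switch(fork()){
		case -1:
			perror("fork");
			exit(1);
		case 0:			/* child: one instance, one input */
			execl("./solve", "solve", in[i], (char*)0);
			perror("execl");
			_exit(1);
		}
	}
	while(wait(0) > 0)	/* no ordering needed: jobs are independent */
		;
	return 0;
}

On a cluster the job manager does the same thing over the network;
on one box with enough cores the scheduler does it for free.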
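
And a sketch of the multitrack case: one thread per track applying the
same filter. POSIX threads again for portability; the track count,
buffer length, and the trivial gain "filter" are stand-ins for real
DSP:

/* apply the same filter to NTRACKS tracks at once, one thread each;
 * tracks share nothing, so no locks are needed--that is what makes
 * it embarrassingly parallel (compile with -lpthread) */
#include <pthread.h>
#include <stdio.h>

enum { NTRACKS = 4, NSAMPLES = 1024 };

static float track[NTRACKS][NSAMPLES];

static void*
filter(void *arg)
{
	float *s = arg;
	int i;

	for(i = 0; i < NSAMPLES; i++)
		s[i] *= 0.5f;	/* stand-in for a real DSP filter */
	return 0;
}

int
main(void)
{
	pthread_t t[NTRACKS];
	int i;

	for(i = 0; i < NTRACKS; i++)
		pthread_create(&t[i], 0, filter, track[i]);
	for(i = 0; i < NTRACKS; i++)
		pthread_join(t[i], 0);
	printf("filtered %d tracks in parallel\n", NTRACKS);
	return 0;
}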