Subject: Re: [Caml-list] ocaml-3.05: a performance experience
From: Mike Lin
To: John Max Skaller
Cc: caml-list@inria.fr
Date: 06 Aug 2002 09:24:01 -0400

> Blocking I/O using OS resources is very slow -- O(log N) is good.
> User-space dispatching is typically very fast: O(1) [using a hashtable].
>
> So if you're parsing XML being read over the internet, the push
> technology is superior -- if you can bypass the high-level socket
> interface (which is non-trivial :-(

I'm still not sure I understand this. Pull parsers can be structured so
that they don't require blocking I/O. That was the point of all the CPS
stuff in Yaxpo, and of what you call "control inversion" (which I'm
pretty sure is also CPS) in Felix. The controlling system can manage the
I/O however it wants, but the interface to the parser still looks like
pull. Do you consider 'select' and 'read' part of the "high-level socket
interface" that should be bypassed? (I just want to clarify, because
OCaml has even higher-level interfaces.)
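To make this concrete, here is a minimal sketch in OCaml of the kind of
pull interface I mean (the type and function names are invented for
illustration; this is not Yaxpo's actual API). Instead of calling a
blocking read itself, the parser suspends by returning a continuation
that asks its driver for more input:

    (* A parser step either yields a token, suspends waiting for more
       input, or finishes.  The continuations are ordinary closures;
       that is all the CPS amounts to here. *)
    type token = Start_tag of string | End_tag of string | Text of string

    type 'a step =
      | Token of token * (unit -> 'a step)   (* a token, plus the rest    *)
      | Need_input of (string -> 'a step)    (* feed me a chunk to resume *)
      | Done of 'a

    (* A blocking driver is one trivial way to run such a parser;
       nothing in the parser itself commits to blocking I/O. *)
    let rec run_blocking ic step =
      match step with
      | Done x -> x
      | Token (_tok, k) -> run_blocking ic (k ())
      | Need_input k ->
          let buf = Bytes.create 4096 in
          let n = input ic buf 0 4096 in
          run_blocking ic (k (Bytes.sub_string buf 0 n))

The same parser, unchanged, can just as well be driven without blocking
at all, as in the next sketch.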
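And here is the non-blocking side, again only a sketch (it assumes the
step type above): a user-space dispatcher that selects over many sockets
and resumes whichever suspended parser has data ready. The hash table
keyed on file descriptors is exactly the O(1) dispatch you mention.

    (* Map each readable socket to the continuation waiting on it. *)
    let serve (conns : (Unix.file_descr * (string -> unit step)) list) =
      let waiting = Hashtbl.create 64 in
      List.iter (fun (fd, k) -> Hashtbl.replace waiting fd k) conns;
      let buf = Bytes.create 4096 in
      while Hashtbl.length waiting > 0 do
        let fds = Hashtbl.fold (fun fd _ acc -> fd :: acc) waiting [] in
        (* negative timeout = wait indefinitely *)
        let ready, _, _ = Unix.select fds [] [] (-1.0) in
        List.iter
          (fun fd ->
            let k = Hashtbl.find waiting fd in
            (* n = 0 means EOF; a real driver would report truncation *)
            let n = Unix.read fd buf 0 4096 in
            Hashtbl.remove waiting fd;
            let rec drain = function
              | Token (_tok, k') -> drain (k' ())  (* handle tokens here *)
              | Need_input k' -> Hashtbl.replace waiting fd k'
              | Done () -> Unix.close fd
            in
            drain (k (Bytes.sub_string buf 0 n)))
          ready
      done

This is the structure I have in mind when I say the controlling system
can do the I/O however it wants while the parser interface still looks
like pull.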
> Absolutely! That is why Felix exists: to allow one to write 'pull'
> code, which does blocking reads of data, but have that code translated
> to 'push' code, where a thread is resumed by a dispatcher when its
> data is available.
>
> For some applications, like telephony, where the number of connections
> is rather large, event-driven code is the ONLY option. Unix OSes are
> typically incapable of handling millions of threads; a user-space
> dispatcher has no trouble at all, even on a small Linux PC. The reason
> is that client-level code 'knows' the encoding of messages and can
> sort out where to send them much faster than any generic OS facility
> can: OS schedulers are designed to run arbitrary mixtures of programs,
> not millions of threads of the same application.

It would be wise not to eschew the OS scheduler entirely. A purely
event-driven model like this has a number of limitations under heavy
load. In particular, it assumes that a "thread" never blocks, but in
fact a thread may block in many cases that are not under your
scheduler's control: page faults, garbage collection, and so forth. In
such cases you can squeeze some extra processing time out of the system
by running several OS threads. If you haven't already, you should have a
look at Matt Welsh's work on SEDA
(http://www.cs.berkeley.edu/~mdw/proj/sandstorm/). I'm not sure whether
Felix already supports this, but if it doesn't, it would be very
interesting to have a way to build custom event queue managers into
Felix's scheduler.

-Mike
-------------------
To unsubscribe, mail caml-list-request@inria.fr
Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs
FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners