Subject: Re: [Caml-list] Re: OCaml is broken
From: Gerd Stolpmann <gerd@gerd-stolpmann.de>
To: Erik Rigtorp
Cc: yminsky, caml-list
Date: Tue, 22 Dec 2009 14:27:26 +0100

On Tuesday, 22.12.2009, at 13:04 +0100, Erik Rigtorp wrote:
> On Mon, Dec 21, 2009 at 23:50, Erik Rigtorp wrote:
> > Some IPC benchmarks, Solaris 10 on a quad-core Intel Core2 Duo. The
> > benchmarks are running on a cpuset with 1 core. I measure the time
> > from sending in one process until the other process receives the
> > message, so a context switch and the message passing are included in
> > the measurements.
> >
> > Max/Min/Avg (in nanoseconds):
> > * Pipes: 28205/5973/6259
> > * Unix domain sockets: 44256/7748/8153
> > * SysV message queues: 19197/5895/6173
> > * POSIX message queues: 37399/10965/11303
> > * TCP on loopback: 29017/7471/7885
> >
> > So the latency is roughly 10 µs for all these solutions. That latency
> > is pretty high and would be several times the processing time of the
> > message itself.
>
> Some more benchmarks:
>
> Max/Min/Avg:
> * Spinlocking shm: 50897/403/761 (this one utilizes multiple cores,
>   since one core is just burning while waiting for data)
> * Pthreads mutex shm: 27582/5246/6577
>
> Forgot to say that all measurements are in nanoseconds.

That's for communication between processes, right? How would the picture
be different (especially comparing the latter two) if you do message
passing between threads? If I remember correctly, threads are more
lightweight on Solaris than processes. That could also affect
context-switch times and scheduler decisions.

Do you have source code? I could also run it on Linux, for comparison.
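In case it helps, here is roughly the kind of measurement loop I have in
mind, as a minimal OCaml sketch for the pipe case. It estimates one-way
latency as half of a write/read round trip, so a context switch is
included as in your setup; the iteration count and the half-round-trip
estimate are my assumptions, not your actual harness:

    (* Ping-pong latency over a pair of pipes: the parent sends one
       byte to the child, the child echoes it back, and the average
       one-way latency is taken as half the round-trip time. *)

    let iterations = 100_000

    let () =
      let (r1, w1) = Unix.pipe () in   (* parent -> child *)
      let (r2, w2) = Unix.pipe () in   (* child  -> parent *)
      let buf = Bytes.create 1 in
      match Unix.fork () with
      | 0 ->
          (* child: echo every byte back to the parent *)
          for _ = 1 to iterations do
            ignore (Unix.read r1 buf 0 1);
            ignore (Unix.write w2 buf 0 1)
          done;
          exit 0
      | _ ->
          let t0 = Unix.gettimeofday () in
          for _ = 1 to iterations do
            ignore (Unix.write w1 buf 0 1);
            ignore (Unix.read r2 buf 0 1)
          done;
          let t1 = Unix.gettimeofday () in
          (* half the round trip ~ one-way latency, in nanoseconds *)
          Printf.printf "avg one-way latency: %.0f ns\n"
            ((t1 -. t0) /. float_of_int iterations /. 2.0 *. 1e9)

(Compile with the unix library, e.g. "ocamlfind ocamlopt -package unix
-linkpkg". For the threaded variant one would replace the fork with two
threads and the pipes with a mutex/condition pair, which is exactly the
comparison I'm curious about.)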
Gerd
--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str. 3, 64289 Darmstadt, Germany
gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714
------------------------------------------------------------