caml-list - the Caml user's mailing list
From: Nils Becker <nils.becker@bioquant.uni-heidelberg.de>
To: caml-list@inria.fr
Subject: [Caml-list] react performance tips?
Date: Thu, 13 Nov 2014 18:13:32 +0100
Message-ID: <5464E6BC.60604@bioquant.uni-heidelberg.de>
In-Reply-To: <20141112110013.E26627FAE2@sympa.inria.fr>

hi,

tl;dr: i'm looking for feedback on how to optimize performance in a
react-based system. minimal, self-contained code (160 lines) is here:
https://gist.github.com/nilsbecker/2ec8d0a136e2fcc20184#file-react_minibenchmark-ml


i've been experimenting on and off with React for an event-driven
simulation. at the moment i cannot seem to make it as fast as a
conventional imperative scheme using mutable state.

the gist contains a boiled-down, simplified version of the simulation's
tight loop. in each update step, one randomly chosen updater out of a
pool fires; its only effect is to increment a global counter. (this is
trivial, but in the actual code each update is still very cheap to
compute.)
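
schematically, the tight loop just picks a random updater and fires it
(names here are made up, the gist differs in detail):

  (* hypothetical driver loop: fire one randomly chosen updater per step *)
  let run ~n_steps ~n_updaters ~fire =
    for _i = 1 to n_steps do
      fire (Random.int n_updaters)
    done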


i implemented this in three different ways (rough sketches follow the
list):

1. "pure". the updaters are (int -> int) events. they are combined
into a single event using E.select. the global counter is a signal
derived from that using S.accum.

2. "side-effect". the global count is an int signal. the updaters are
(unit -> unit) events which as a side effect increment the global
count. they don't need to be combined since the side effects happen
also without.

3. "mutable". react is not used. the updaters are (unit -> unit)
functions which mutate a global int ref.
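
roughly, and leaving out details that differ from the gist, the three
variants look like this (names such as fire1/fire2/fire3 are made up
here):

  open React

  let n_updaters = 100

  (* 1. "pure": one (int -> int) event per updater; E.select merges
     them, S.accum folds the update functions into the counter *)
  let pairs1 = Array.init n_updaters (fun _ -> E.create ())
  let count1 =
    S.accum (E.select (Array.to_list (Array.map fst pairs1))) 0
  let fire1 i = (snd pairs1.(i)) (fun c -> c + 1)

  (* 2. "side-effect": the count is a primitive int signal; each updater
     sets it directly, outside react's update step, so nothing needs to
     be combined and downstream signals can still hang off count2 *)
  let count2, set_count2 = S.create 0
  let updaters2 =
    Array.init n_updaters (fun _ () -> set_count2 (S.value count2 + 1))
  let fire2 i = updaters2.(i) ()

  (* 3. "mutable": no react at all, just a global int ref *)
  let count3 = ref 0
  let fire3 _i = incr count3

in each case fireN is what the driver loop above calls.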


i like version 1 best since it makes the dependencies explicit.
version 2 is almost like imperative programming; its benefit is that
one can still use react to conveniently add downstream signals, e.g.
to write out the simulation state at regular intervals. version 3
does not have this advantage.
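
for example, a downstream consumer of the count signal could look like
this (hypothetical, continuing with React opened as in the sketch
above):

  (* print the count whenever it reaches a multiple of period. the
     resulting event has to be kept alive somewhere, otherwise react's
     weak references allow it to be collected. *)
  let log_every period count =
    S.changes count
    |> E.fmap (fun c -> if c mod period = 0 then Some c else None)
    |> E.map (fun c -> Printf.printf "count = %d\n%!" c)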

when profiling, i found:

- 1. is a bit slower than 2. for small numbers of updaters, but really
  slow for many. E.select seems to be the culprit, causing scaling
  that is linear in the number of updaters.

- 2. has runtime independent of the number of updaters

- most of the time is spent in gc and creating new weak arrays.

- 3. is at least 10 times faster than 1. or 2.

- when trivial intermediate events are added to version 1., execution
  slows down further; traversing dependency chains has a
  non-negligible cost.


so my questions are:

am i using react in the proper way?

am i doing something stupid that slows it down? what would be a better
setup?

is there a way to combine events that does not suffer from the O(n)
cost of a list fold, as E.select apparently does?
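
for instance, would it help to build the merge as a balanced tree of
two-way selects instead of one flat select over the whole list?
something like (untested):

  (* merge pairwise into a tree of two-way selects *)
  let rec tree_select = function
    | [] -> E.never
    | [e] -> e
    | es ->
      let rec pair = function
        | a :: b :: rest -> E.select [a; b] :: pair rest
        | rest -> rest
      in
      tree_select (pair es)

i don't know whether react's propagation would actually benefit from
this, or whether the extra intermediate nodes just add to the traversal
cost mentioned above.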

or is this just the fastest a react-based scheme can run, i.e. 10x
slower than imperative updates?

i'm still new to react, so any comments on the code in general are of
course welcome.

thanks a lot for any hints.


