caml-list - the Caml user's mailing list
From: Goswin von Brederlow <goswin-v-b@web.de>
To: Hugo Ferreira <hmf@inescporto.pt>
Cc: caml-list@yquem.inria.fr
Subject: Re: [Caml-list] Shared data space: How to manage Ancient keys?
Date: Fri, 19 Mar 2010 11:12:45 +0100	[thread overview]
Message-ID: <87hboc4o2q.fsf@frosties.localdomain> (raw)
In-Reply-To: <4BA271D2.9030602@inescporto.pt> (Hugo Ferreira's message of "Thu, 18 Mar 2010 18:32:50 +0000")

Hugo Ferreira <hmf@inescporto.pt> writes:

> Hello,
>
> I am trying to implement a parallel algorithm and have opted to use
> the Ancient module to share data across processes because it dynamically
> reallocates memory when mapping data (I don't know the size of the
> objects beforehand). However I am having trouble managing its use of
> keys.
>
> Maybe someone here has a solution to my problem.
>
> To make things clear - a little background. I have a set of worker
> processes that cooperate to analyse a set of sequences. I assume
> all sequences are in a global blackboard-type /LINDA-like data space.
> Each worker removes a single sequence. It then scans all other existing
> sequences and selects another one. It then a) removes the selected
> sequence b) merges the 2 sequences at hand and generates 0, 1 or more
> new sequences. The process repeats itself until no more sequences exist
> in the global data space.
>
> In the above scenario all sequences would be assigned a unique id which
> increases sequentially. If I used a common data structure such as a
> binary tree I could simply increment a counter and use that as a key
> for the insertion/deletion of sequences into/from the binary tree.
> However I cannot do this here because I can only share linear memory
> via mapped files in Ocaml.
>
> So far I have made a small experiment that allows me to insert data
> into a shared memory space. Ancient allows one to copy various objects
> into the memory space via an integer index. From the code it seems that
> the indexing table is an array-like structure. So when I add an object
> I must identify an index (slot) that is empty (unused) and reuse that
> otherwise memory will grow unchecked.
>
> My question is how can I generate and share keys for Ancient's use
> without exceeding memory usage?
>
> TIA,
> Hugo F.

How about a simple solution: only one process handles indexes.

You said you have worker processes. Add one controlling process on top
that spawns all the workers and keeps a communication line open with
each of them, e.g. a pipe. Each worker would then tell the controller
to remove an index, or request an unused index to store a result. It's
probably a good idea to request unused indexes in batches, say 100 at a
time, to cut down on latency.
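Sketched in OCaml, the controller's index bookkeeping might look like
the following. This is only the allocation logic; the pipe protocol and
the actual Ancient calls are left out, and the `pool` type and function
names are hypothetical. The point is that recycled slots are handed out
before fresh ones, so the index table does not grow without bound:

```ocaml
(* A pool of Ancient slot indexes managed by the controller.
   [free] holds indexes that workers have released; [next] is the
   lowest index that has never been handed out. *)
type pool = {
  mutable free : int list;
  mutable next : int;
}

let create () = { free = []; next = 0 }

(* Hand out [n] indexes, preferring recycled slots over fresh ones
   so that the shared index table stays as small as possible. *)
let alloc pool n =
  let rec take acc n =
    if n = 0 then List.rev acc
    else match pool.free with
      | i :: rest ->
        pool.free <- rest;
        take (i :: acc) (n - 1)
      | [] ->
        let i = pool.next in
        pool.next <- pool.next + 1;
        take (i :: acc) (n - 1)
  in
  take [] n

(* A worker reports that a slot no longer holds a sequence. *)
let release pool i = pool.free <- i :: pool.free
```

A worker would send one message per `release` and one request per batch
of 100 from `alloc`; since only the controller touches the pool, no
locking is needed.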

MfG
        Goswin



Thread overview: 3+ messages
2010-03-18 18:32 Hugo Ferreira
2010-03-19 10:12 ` Goswin von Brederlow [this message]
2010-03-19 11:07   ` [Caml-list] " Hugo Ferreira
