caml-list - the Caml user's mailing list
From: "Török Edwin" <edwin+ml-ocaml@etorok.net>
To: caml-list@inria.fr
Subject: Re: [Caml-list] OpenGL and LWT
Date: Mon, 17 Mar 2014 12:34:10 +0200
Message-ID: <5326CFA2.909@etorok.net>
In-Reply-To: <CANhEzE4k5unUmDbT+1GsC8BQnt7LPt2QSdM=_joa6d6BKdETUA@mail.gmail.com>

On 03/17/2014 12:09 PM, Jeremie Dimino wrote:
> Hi,
> 
> On Sat, Mar 15, 2014 at 5:37 PM, Richard Neswold <rich.neswold@gmail.com> wrote:
> 
>     Hello,
> 
>     What would be the appropriate way to use LWT and LabGL together?
> 
>     My gut feeling would be to have the core model and logic run under LWT, with a 'detached' thread executing the OpenGL main loop. The thread would get events through the reactive module and update the display accordingly.
> 
>     The problem is that I'm still trying to wrap my head around LWT, and I'm not sure a detached thread can participate in the LWT environment it was spawned from. It appears the detached thread is supposed to do something and eventually return a value.
> 
> 
> Yes, the purpose of Lwt_preemptive.detach is to execute a blocking action in a separate system thread so that it does not block the other Lwt threads. From another system thread you cannot run Lwt code directly; you have to use Lwt_preemptive.run_in_main.
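
For example, a minimal sketch of that detach/run_in_main pattern (blocking_compute here is just a made-up stand-in for real blocking work):

    (* Run a blocking call in a worker thread, and call back into
       Lwt from that thread via run_in_main. *)
    let blocking_compute n =
      Unix.sleep 1;  (* stands in for any blocking work *)
      (* run_in_main executes an Lwt thread in the main thread and
         waits for its result; it is safe to call from a worker. *)
      Lwt_preemptive.run_in_main (fun () ->
        Lwt_io.printl "progress from the detached thread");
      n * 2

    let main () =
      (* detach runs blocking_compute in a separate system thread;
         the other Lwt threads keep running in the meantime. *)
      Lwt.bind (Lwt_preemptive.detach blocking_compute 21)
        (fun r -> Lwt_io.printlf "result: %d" r)

    let () = Lwt_main.run (main ())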

Be careful to have an OpenGL context set as current on the thread(s) from which you make GL calls. See:
https://www.opengl.org/wiki/OpenGL_and_multithreading
http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html

I don't think you'll get a performance improvement from launching multiple GL calls in different (Lwt) threads.

>  
> 
>     Is the correct approach to use a detached thread to make the individual OpenGL calls?
> 
> 
> I think that would be very slow, as it involves system thread synchronization.
>  
> 
>     Any recommendations or resources on the subject would be appreciated.
> 
> 
> You could try to assume that OpenGL calls are fast and perform them in the same thread as Lwt.

Well, there are a few that are slow, like swapping buffers, which can wait for vsync.

I think it's best to make GL calls from just *one* thread: either your application's main thread, or a dedicated rendering thread.
You can probably build a Lwt.t that waits for the rendering thread to finish one frame, using
Lwt_unix.make_notification/send_notification for thread-safe Lwt notification, coupled with Lwt.wait/Lwt.wakeup_result.
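
A minimal sketch of that scheme (render_one_frame is only a placeholder for the actual GL drawing and buffer swap):

    (* Resolver for the Lwt thread currently waiting on a frame.
       It is only touched from the Lwt main thread, since the
       notification callback runs there, so no locking is needed. *)
    let frame_waiter : unit Lwt.u option ref = ref None

    (* Register the callback once; send_notification may then be
       called safely from any system thread. *)
    let frame_done =
      Lwt_unix.make_notification (fun () ->
        match !frame_waiter with
        | Some u -> frame_waiter := None; Lwt.wakeup u ()
        | None -> ())

    (* Called from Lwt code: resolves when the next frame is done. *)
    let wait_for_frame () =
      let t, u = Lwt.wait () in
      frame_waiter := Some u;
      t

    let render_one_frame () = ()  (* placeholder: GL calls + swap *)

    (* Body of the dedicated rendering thread. *)
    let render_loop () =
      while true do
        render_one_frame ();
        Lwt_unix.send_notification frame_done
      done

From the Lwt side you would then write Lwt.bind (wait_for_frame ()) (fun () -> ...) to continue once the frame is on screen.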

Best regards,
--Edwin



Thread overview: 11+ messages
2014-03-15 17:37 Richard Neswold
2014-03-17 10:09 ` Jeremie Dimino
2014-03-17 10:32   ` Ivan Gotovchits
2014-03-17 13:39     ` Rich Neswold
2014-03-17 14:05       ` Daniel Bünzli
2014-03-17 14:13         ` Daniel Bünzli
2014-03-17 22:50         ` Richard Neswold
2014-03-17 23:50           ` Daniel Bünzli
2014-03-19 12:01             ` Rich Neswold
2014-03-17 10:34   ` Török Edwin [this message]
2014-03-17 11:01     ` Daniel Bünzli
