From: David Leimbach
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Fri, 4 Sep 2009 08:04:40 -0700
Subject: Re: [9fans] "Blocks" in C

On Fri, Sep 4, 2009 at 7:41 AM, Bakul Shah <bakul+plan9@bitblocks.com> wrote:
> On Fri, 04 Sep 2009 00:47:18 PDT David Leimbach <leimy2k@gmail.com> wrote:
> > On Fri, Sep 4, 2009 at 12:11 AM, Bakul Shah <bakul+plan9@bitblocks.com> wrote:
> >
> > > But this has no more to do with parallelism than any other
> > > feature of C. If you used __block vars in a block, you'd
> > > still need to lock them when the block is called from
> > > different threads.
> > >
> > I just wrote a prime sieve with terrible shutdown synchronization you can
> > look at here:
> >
> > http://paste.lisp.org/display/86549
>
> Not sure how your program invalidates what I said.  Blocks do
> provide more syntactic sugar but that "benefit" is independent
> of GCD (grand central dispatch) or what have you. Given that
> __block vars are shared, I don't see how you can avoid locking
> if blocks get used in parallel.

You've said it yourself: "if blocks get used in parallel". If the
blocks are scheduled onto the same serial (non-concurrent) queue, there
shouldn't be a problem, unless you've got blocks that share state
scheduled and running on multiple serial queues. GCD has three
concurrent queues, each with a different priority, and to the best of
my knowledge you can't create any more concurrent queues; the rest are
serial queues, and they run blocks in FIFO order.
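
To make that concrete, here's a minimal sketch (not the sieve code; the
queue label and the toy sum are made up): every block that touches the
shared __block variable goes onto one serial queue, so the blocks run
one at a time and the variable never needs a lock. The three concurrent
queues I mean are the global ones you get from dispatch_get_global_queue
with the HIGH/DEFAULT/LOW priorities.

/* Sketch only: all blocks touching "sum" go to one serial queue, so
   they never run at the same time and "sum" needs no lock.  Needs a
   blocks-aware compiler (e.g. clang on 10.6). */
#include <dispatch/dispatch.h>
#include <stdio.h>

int
main(void)
{
	__block int sum = 0;	/* shared, but only ever touched on q */
	dispatch_queue_t q = dispatch_queue_create("sketch.serial", NULL);
	dispatch_group_t g = dispatch_group_create();
	int i;

	for(i = 1; i <= 100; i++)
		dispatch_group_async(g, q, ^{ sum += i; });	/* FIFO, one at a time */

	dispatch_group_wait(g, DISPATCH_TIME_FOREVER);
	printf("sum = %d\n", sum);	/* 5050, no locking anywhere */

	dispatch_release(g);
	dispatch_release(q);
	return 0;
}

If those same blocks went onto a global concurrent queue instead,
Bakul's point would apply and sum would need a lock (or an atomic).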

Given that you can arrange your code such that no two blocks sharing
the same state can execute at the same time, why would you lock it?
What I did was allocate context data for the read end of a pipe fd on
the heap, so that when the runtime launched the associated block
(i.e. when something was written to the pipe's write fd) it would get
its context pointer in its block struct, which I could access via
get_context.
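
Roughly the shape of that pipe/context setup, as a sketch rather than
the actual paste (the Ctx struct, its fields, and the queue label are
invented here): the per-pipe context is malloc'd, attached to the read
source with dispatch_set_context, and fetched back inside the handler
block with dispatch_get_context.

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct Ctx Ctx;
struct Ctx {
	int	rfd;	/* read end of the pipe */
	int	count;	/* per-pipe state; no lock, only this source's blocks touch it */
};

int
main(void)
{
	int fds[2];
	if(pipe(fds) < 0){
		perror("pipe");
		return 1;
	}

	Ctx *ctx = malloc(sizeof *ctx);	/* context lives on the heap */
	ctx->rfd = fds[0];
	ctx->count = 0;

	dispatch_queue_t q = dispatch_queue_create("sketch.pipe", NULL);
	dispatch_source_t src = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
		fds[0], 0, q);
	dispatch_set_context(src, ctx);	/* hang the context off the source */

	dispatch_source_set_event_handler(src, ^{
		Ctx *c = dispatch_get_context(src);	/* same pointer back */
		char buf[32];
		ssize_t n = read(c->rfd, buf, sizeof buf);
		if(n > 0)
			printf("event %d: read %zd byte(s)\n", ++c->count, n);
	});
	dispatch_resume(src);

	write(fds[1], "x", 1);	/* each write fires the handler block */
	write(fds[1], "y", 1);

	dispatch_main();	/* never returns; ^C to quit */
}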

I should note that, for some reason, my code stops working as I expect
once MAX is set to something over 700, so I'm probably *still* not
doing something correctly, or I did something Apple didn't expect.

Dave