mailing list of musl libc
* Using PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_ERRORCHECK leads to segmentation fault
@ 2015-09-03 12:44 Eugene
  2015-09-03 14:15 ` Rich Felker
  0 siblings, 1 reply; 4+ messages in thread
From: Eugene @ 2015-09-03 12:44 UTC (permalink / raw)
  To: musl

Hello,

I have a problem with mutexes of type PTHREAD_MUTEX_RECURSIVE and
PTHREAD_MUTEX_ERRORCHECK.
Using these mutexes sometimes leads to a segmentation fault in the
functions __pthread_mutex_trylock_owner() and __pthread_mutex_unlock().
The problem is intermittent and very hard to reproduce; I see it with
the PJSIP library.
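
For reference, the mutexes are created roughly like this (a minimal
sketch of the attribute setup; PJSIP's actual initialization code may
differ):

    #include <pthread.h>

    pthread_mutex_t m;
    pthread_mutexattr_t a;

    pthread_mutexattr_init(&a);
    /* the same crash happens with PTHREAD_MUTEX_ERRORCHECK */
    pthread_mutexattr_settype(&a, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&m, &a);
    pthread_mutexattr_destroy(&a);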

The faulting lines are the following:

__pthread_mutex_unlock():
  24                 if (next != &self->robust_list.head) *(volatile void *volatile *)
  25                         ((char *)next - sizeof(void *)) = prev;


__pthread_mutex_trylock_owner():
  37         if (next != &self->robust_list.head) *(volatile void *volatile *)
  38                 ((char *)next - sizeof(void *)) = &m->_m_next;

GCC: 4.9.2
musl version: 1.1.10
Architecture: MIPS (little endian)

Best regards,
Eugene



* Re: Using PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_ERRORCHECK leads to segmentation fault
  2015-09-03 12:44 Using PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_ERRORCHECK leads to segmentation fault Eugene
@ 2015-09-03 14:15 ` Rich Felker
  2015-09-04 12:11   ` Eugene
  0 siblings, 1 reply; 4+ messages in thread
From: Rich Felker @ 2015-09-03 14:15 UTC (permalink / raw)
  To: musl

On Thu, Sep 03, 2015 at 03:44:57PM +0300, Eugene wrote:
> Hello,
> 
> I have a problem with mutexes of type PTHREAD_MUTEX_RECURSIVE and
> PTHREAD_MUTEX_ERRORCHECK.
> Using these mutexes sometimes leads to a segmentation fault in the
> functions __pthread_mutex_trylock_owner() and
> __pthread_mutex_unlock().
> The problem is intermittent and very hard to reproduce; I see it
> with the PJSIP library.

Do you have a way to reproduce it without an actual SIP
configuration/deployment?

> The faulting lines are the following:
> 
> __pthread_mutex_unlock():
>  24                 if (next != &self->robust_list.head) *(volatile void *volatile *)
>  25                         ((char *)next - sizeof(void *)) = prev;
> 
> 
> __pthread_mutex_trylock_owner():
>  37         if (next != &self->robust_list.head) *(volatile void *volatile *)
>  38                 ((char *)next - sizeof(void *)) = &m->_m_next;

This is almost surely a bug in the caller but I'd like to look into
it. My guess is that they're destroying or freeing mutexes that are
locked.
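
Something like the following (a contrived sketch, not taken from
PJSIP) would produce exactly that kind of crash: the mutex is freed
while still linked into the thread's robust list, so a later robust
list update writes through freed memory:

    #include <pthread.h>
    #include <stdlib.h>

    void buggy(void)
    {
            pthread_mutexattr_t a;
            pthread_mutexattr_init(&a);
            pthread_mutexattr_settype(&a, PTHREAD_MUTEX_RECURSIVE);

            pthread_mutex_t *m = malloc(sizeof *m);
            pthread_mutex_init(m, &a);
            pthread_mutexattr_destroy(&a);

            pthread_mutex_lock(m);
            free(m);  /* freed while locked: undefined behavior */

            /* the freed mutex is still linked into this thread's
             * robust list, so a later lock or unlock of some other
             * mutex can fault in __pthread_mutex_trylock_owner()
             * or __pthread_mutex_unlock() on the lines quoted
             * above */
    }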

Rich



* Re: Using PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_ERRORCHECK leads to segmentation fault
  2015-09-03 14:15 ` Rich Felker
@ 2015-09-04 12:11   ` Eugene
  2015-09-04 16:05     ` Rich Felker
  0 siblings, 1 reply; 4+ messages in thread
From: Eugene @ 2015-09-04 12:11 UTC (permalink / raw)
  To: musl

Unfortunately I can reproduce it only with the real configuration.
I have tried to write a small program to reproduce the problem, but
without luck.

I can add some debug output or asserts if that helps.

On 03.09.2015 17:15, Rich Felker wrote:
> On Thu, Sep 03, 2015 at 03:44:57PM +0300, Eugene wrote:
>> Hello,
>>
>> I have a problem with mutexes of type PTHREAD_MUTEX_RECURSIVE and
>> PTHREAD_MUTEX_ERRORCHECK.
>> Using these mutexes sometimes leads to a segmentation fault in the
>> functions __pthread_mutex_trylock_owner() and
>> __pthread_mutex_unlock().
>> The problem is intermittent and very hard to reproduce; I see it
>> with the PJSIP library.
> Do you have a way to reproduce it without an actual SIP
> configuration/deployment?
>
>> The faulting lines are the following:
>>
>> __pthread_mutex_unlock():
>>   24                 if (next != &self->robust_list.head) *(volatile void *volatile *)
>>   25                         ((char *)next - sizeof(void *)) = prev;
>>
>>
>> __pthread_mutex_trylock_owner():
>>   37         if (next != &self->robust_list.head) *(volatile void *volatile *)
>>   38                 ((char *)next - sizeof(void *)) = &m->_m_next;
> This is almost surely a bug in the caller but I'd like to look into
> it. My guess is that they're destroying or freeing mutexes that are
> locked.
>
> Rich




* Re: Using PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_ERRORCHECK leads to segmentation fault
  2015-09-04 12:11   ` Eugene
@ 2015-09-04 16:05     ` Rich Felker
  0 siblings, 0 replies; 4+ messages in thread
From: Rich Felker @ 2015-09-04 16:05 UTC (permalink / raw)
  To: musl

On Fri, Sep 04, 2015 at 03:11:23PM +0300, Eugene wrote:
> Unfortunately I can reproduce it only with the real configuration.
> I have tried to write a small program to reproduce the problem, but
> without luck.
> 
> I can add some debug output or asserts if that helps.

Logging mutex lock/unlock and destroy/free would show whether a locked
mutex is destroyed or freed, I think.
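
For example, something like these wrappers (hypothetical names; the
point is just to route every mutex operation in the application
through a logging shim):

    #include <pthread.h>
    #include <stdio.h>

    static int log_lock(pthread_mutex_t *m)
    {
            int r = pthread_mutex_lock(m);
            fprintf(stderr, "lock    %p -> %d\n", (void *)m, r);
            return r;
    }

    static int log_unlock(pthread_mutex_t *m)
    {
            int r = pthread_mutex_unlock(m);
            fprintf(stderr, "unlock  %p -> %d\n", (void *)m, r);
            return r;
    }

    static int log_destroy(pthread_mutex_t *m)
    {
            int r = pthread_mutex_destroy(m);
            fprintf(stderr, "destroy %p -> %d\n", (void *)m, r);
            return r;
    }

A destroy or free of an address that appears locked but never
unlocked in the log would confirm the guess.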

Another approach would be to add to pthread_mutex_destroy a check for
whether the mutex is locked, and make it crash. This would only catch
invalid destroy though, not free-without-destroy.
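
Roughly like this (a sketch only; it assumes _m_lock is the internal
owner/lock word and a_crash() is available, as elsewhere in musl's
source -- adjust to the actual internals in 1.1.10):

    #include "pthread_impl.h"

    int pthread_mutex_destroy(pthread_mutex_t *m)
    {
            /* owner bits still set means the mutex is locked;
             * crash at the invalid destroy instead of later in
             * the robust list update */
            if (m->_m_lock & 0x7fffffff) a_crash();
            return 0;
    }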

Rich


