From: Jens Gustedt
Subject: [PATCH 8/8] implement the local lock for conditions with __lock & Co
Date: Thu, 22 Jun 2017 23:42:18 +0200
Message-ID: <49684dbabde96810776f7c4f282d44da8fd9a947.1498228733.git.Jens.Gustedt@inria.fr>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com

Condition variables (pthread and C11) had a specialized lock implementation
that was already designed such that no access to the lock object is made
after an unlock. Here the motivation was not that the object might be freed,
but that the implementation chains the accesses through variables that live
on the waiting threads' stacks. The previous implementation distinguished
only two internal states, one for the uncongested case and one for the
congested case; the latter was then used to decide whether other threads
have to be woken up. The new lock structure handles such situations more
adequately, because it keeps track of the exact congestion and lets us
react accordingly.
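The __lock.h header used below is introduced earlier in this series and is
not included in this message. As a rough sketch of the idea only, a lock
that records the exact congestion could look like the following; it leans
on the musl-internal a_cas, a_fetch_add, __wait and __wake primitives from
pthread_impl.h, and the names example_lock/example_unlock are placeholders
rather than the actual __lock_fast/__unlock_fast interface:

/* Sketch only: the lock word is 0 when free and uncontended, a positive
 * count k when free with k threads still in the acquisition path, and
 * INT_MIN+k when held with k threads accounted for (holder included). */

#include <limits.h>
#include "pthread_impl.h"

static inline void example_lock(volatile int *l)
{
	/* Fast path: free and uncontended. */
	int current = a_cas(l, 0, INT_MIN + 1);
	if (!current) return;
	/* Announce ourselves in the congestion count, then retry. */
	current = a_fetch_add(l, 1) + 1;
	for (;;) {
		if (current < 0) {
			/* Held by someone else: sleep until the word changes. */
			__wait(l, 0, current, 1);
			current = *l;
		} else {
			/* Free with `current` registered threads: try to take it;
			 * our own count unit becomes the holder's unit. */
			int val = a_cas(l, current, INT_MIN + current);
			if (val == current) return;
			current = val;
		}
	}
}

static inline void example_unlock(volatile int *l)
{
	/* Drop the lock flag and our count unit in one step; wake one
	 * waiter only if the remaining count shows real congestion. */
	if (a_fetch_add(l, -(INT_MIN + 1)) != INT_MIN + 1)
		__wake(l, 1, 1);
}

The sign bit marks the lock as held and the magnitude counts every thread
in the acquisition path, so unlike the two-state lock removed below, the
unlocker can tell precisely whether a futex wake is needed at all.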
---
 src/thread/pthread_cond_timedwait.c | 48 ++++++++++++-------------------------
 1 file changed, 15 insertions(+), 33 deletions(-)

diff --git a/src/thread/pthread_cond_timedwait.c b/src/thread/pthread_cond_timedwait.c
index 6d92ea88..873fac2c 100644
--- a/src/thread/pthread_cond_timedwait.c
+++ b/src/thread/pthread_cond_timedwait.c
@@ -1,5 +1,11 @@
 #include "pthread_impl.h"
 
+#if defined(__GNUC__) && defined(__PIC__)
+#define inline inline __attribute__((always_inline))
+#endif
+
+#include "__lock.h"
+
 void __pthread_testcancel(void);
 int __pthread_mutex_lock(pthread_mutex_t *);
 int __pthread_mutex_unlock(pthread_mutex_t *);
@@ -33,30 +39,6 @@ struct waiter {
 	volatile int *notify;
 };
 
-/* Self-synchronized-destruction-safe lock functions */
-
-static inline void lock(volatile int *l)
-{
-	if (a_cas(l, 0, 1)) {
-		a_cas(l, 1, 2);
-		do __wait(l, 0, 2, 1);
-		while (a_cas(l, 0, 2));
-	}
-}
-
-static inline void unlock(volatile int *l)
-{
-	if (a_swap(l, 0)==2)
-		__wake(l, 1, 1);
-}
-
-static inline void unlock_requeue(volatile int *l, volatile int *r, int w)
-{
-	a_store(l, 0);
-	if (w) __wake(l, 1, 1);
-	else __syscall(SYS_futex, l, FUTEX_REQUEUE|__futex_private, 0, 1, r);
-}
-
 enum {
 	WAITING,
 	SIGNALED,
@@ -83,7 +65,7 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		seq = c->_c_seq;
 		a_inc(&c->_c_waiters);
 	} else {
-		lock(&c->_c_lock);
+		__lock_fast(&c->_c_lock);
 
 		seq = node.barrier = 2;
 		fut = &node.barrier;
@@ -93,7 +75,7 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		if (!c->_c_tail) c->_c_tail = &node;
 		else node.next->prev = &node;
 
-		unlock(&c->_c_lock);
+		__unlock_fast(&c->_c_lock);
 	}
 
 	__pthread_mutex_unlock(m);
@@ -124,14 +106,14 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		 * after seeing a LEAVING waiter without getting notified
 		 * via the futex notify below. */
 
-		lock(&c->_c_lock);
+		__lock_fast(&c->_c_lock);
 
 		if (c->_c_head == &node) c->_c_head = node.next;
 		else if (node.prev) node.prev->next = node.next;
 		if (c->_c_tail == &node) c->_c_tail = node.prev;
 		else if (node.next) node.next->prev = node.prev;
 
-		unlock(&c->_c_lock);
+		__unlock_fast(&c->_c_lock);
 
 		if (node.notify) {
 			if (a_fetch_add(node.notify, -1)==1)
@@ -139,7 +121,7 @@
 		}
 	} else {
 		/* Lock barrier first to control wake order. */
-		lock(&node.barrier);
+		__lock_fast(&node.barrier);
 	}
 
 relock:
@@ -155,7 +137,7 @@
 	/* Unlock the barrier that's holding back the next waiter, and
 	 * either wake it or requeue it to the mutex. */
 	if (node.prev)
-		unlock_requeue(&node.prev->barrier, &m->_m_lock, m->_m_type & 128);
+		__unlock_requeue_fast(&node.prev->barrier, &m->_m_lock, m->_m_type & 128);
 	else
 		a_dec(&m->_m_waiters);
 
@@ -179,7 +161,7 @@ int __private_cond_signal(pthread_cond_t *c, int n)
 	volatile int ref = 0;
 	int cur;
 
-	lock(&c->_c_lock);
+	__lock_fast(&c->_c_lock);
 	for (p=c->_c_tail; n && p; p=p->prev) {
 		if (a_cas(&p->state, WAITING, SIGNALED) != WAITING) {
 			ref++;
@@ -197,7 +179,7 @@
 		c->_c_head = 0;
 	}
 	c->_c_tail = p;
-	unlock(&c->_c_lock);
+	__unlock_fast(&c->_c_lock);
 
 	/* Wait for any waiters in the LEAVING state to remove
 	 * themselves from the list before returning or allowing
@@ -205,7 +187,7 @@
 	while ((cur = ref)) __wait(&ref, 0, cur, 1);
 
 	/* Allow first signaled waiter, if any, to proceed. */
-	if (first) unlock(&first->barrier);
+	if (first) __unlock_fast(&first->barrier);
 
 	return 0;
 }