From: Jens Gustedt
Newsgroups: gmane.linux.lib.musl.general
Subject: [PATCH 7/7] implement the local lock for condition variables with the new lock feature
Date: Wed, 3 Jan 2018 14:17:12 +0100
Message-ID: <39a0bab7db1f77ff5ccb8d0fe91a80aa87119113.1514985618.git.Jens.Gustedt@inria.fr>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com

Condition variables (pthread and C11) had a specialized lock
implementation that was already designed such that no access to the
variable is made after an unlock. Here this is mandatory, because the
implementation uses variables on the thread's stack to chain the
waiters.
The previous implementation distinguished only two internal states: one
where there is no congestion and one where there is. The latter is then
used to wake up other threads when necessary. The new lock structure
handles this situation more adequately, because it keeps track of the
exact congestion and allows us to react appropriately.
---
 src/thread/pthread_cond_timedwait.c | 53 ++++++++++++-------------------------
 1 file changed, 17 insertions(+), 36 deletions(-)

diff --git a/src/thread/pthread_cond_timedwait.c b/src/thread/pthread_cond_timedwait.c
index ed8569c2..659e9b5a 100644
--- a/src/thread/pthread_cond_timedwait.c
+++ b/src/thread/pthread_cond_timedwait.c
@@ -1,5 +1,11 @@
 #include "pthread_impl.h"
 
+#if defined(__GNUC__) && defined(__PIC__)
+#define inline inline __attribute__((always_inline))
+#endif
+
+#include "__lock.h"
+
 void __pthread_testcancel(void);
 int __pthread_mutex_lock(pthread_mutex_t *);
 int __pthread_mutex_unlock(pthread_mutex_t *);
@@ -33,31 +39,6 @@ struct waiter {
 	volatile int *notify;
 };
 
-/* Self-synchronized-destruction-safe lock functions */
-
-static inline void lock(volatile int *l)
-{
-	if (a_cas(l, 0, 1)) {
-		a_cas(l, 1, 2);
-		do __wait(l, 0, 2, 1);
-		while (a_cas(l, 0, 2));
-	}
-}
-
-static inline void unlock(volatile int *l)
-{
-	if (a_swap(l, 0)==2)
-		__wake(l, 1, 1);
-}
-
-static inline void unlock_requeue(volatile int *l, volatile int *r, int w)
-{
-	a_store(l, 0);
-	if (w) __wake(l, 1, 1);
-	else __syscall(SYS_futex, l, FUTEX_REQUEUE|FUTEX_PRIVATE, 0, 1, r) != -ENOSYS
-		|| __syscall(SYS_futex, l, FUTEX_REQUEUE, 0, 1, r);
-}
-
 enum {
 	WAITING,
 	SIGNALED,
@@ -84,7 +65,7 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		seq = c->_c_seq;
 		a_inc(&c->_c_waiters);
 	} else {
-		lock(&c->_c_lock);
+		__lock_fast(&c->_c_lock);
 
 		seq = node.barrier = 2;
 		fut = &node.barrier;
@@ -94,7 +75,7 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		if (!c->_c_tail) c->_c_tail = &node;
 		else node.next->prev = &node;
 
-		unlock(&c->_c_lock);
+		__unlock_fast(&c->_c_lock);
 	}
 
 	__pthread_mutex_unlock(m);
@@ -125,14 +106,14 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		 * after seeing a LEAVING waiter without getting notified
 		 * via the futex notify below. */
 
-		lock(&c->_c_lock);
-
+		__lock_fast(&c->_c_lock);
+
 		if (c->_c_head == &node) c->_c_head = node.next;
 		else if (node.prev) node.prev->next = node.next;
 		if (c->_c_tail == &node) c->_c_tail = node.prev;
 		else if (node.next) node.next->prev = node.prev;
-
-		unlock(&c->_c_lock);
+
+		__unlock_fast(&c->_c_lock);
 
 		if (node.notify) {
 			if (a_fetch_add(node.notify, -1)==1)
@@ -140,7 +121,7 @@ int __pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restri
 		}
 	} else {
 		/* Lock barrier first to control wake order. */
-		lock(&node.barrier);
+		__lock_fast(&node.barrier);
 	}
 
 relock:
@@ -156,7 +137,7 @@ relock:
 	/* Unlock the barrier that's holding back the next waiter, and
 	 * either wake it or requeue it to the mutex. */
 	if (node.prev)
-		unlock_requeue(&node.prev->barrier, &m->_m_lock, m->_m_type & 128);
+		__unlock_requeue_fast(&node.prev->barrier, &m->_m_lock, m->_m_type & 128);
 	else
 		a_dec(&m->_m_waiters);
 
@@ -180,7 +161,7 @@ int __private_cond_signal(pthread_cond_t *c, int n)
 	volatile int ref = 0;
 	int cur;
 
-	lock(&c->_c_lock);
+	__lock_fast(&c->_c_lock);
 	for (p=c->_c_tail; n && p; p=p->prev) {
 		if (a_cas(&p->state, WAITING, SIGNALED) != WAITING) {
 			ref++;
@@ -198,7 +179,7 @@ int __private_cond_signal(pthread_cond_t *c, int n)
 		c->_c_head = 0;
 	}
 	c->_c_tail = p;
-	unlock(&c->_c_lock);
+	__unlock_fast(&c->_c_lock);
 
 	/* Wait for any waiters in the LEAVING state to remove
 	 * themselves from the list before returning or allowing
@@ -206,7 +187,7 @@ int __private_cond_signal(pthread_cond_t *c, int n)
 	while ((cur = ref)) __wait(&ref, 0, cur, 1);
 
 	/* Allow first signaled waiter, if any, to proceed. */
-	if (first) unlock(&first->barrier);
+	if (first) __unlock_fast(&first->barrier);
 
 	return 0;
 }
-- 
2.15.1