From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jens Gustedt
Newsgroups: gmane.linux.lib.musl.general
Subject: [PATCH 4/7] separate the fast parts of __lock and __unlock into a .h file that may be used by other TU
Date: Wed, 3 Jan 2018 14:17:12 +0100
Message-ID: <41048892c9aa47a6ebde97cd90f244981221a437.1514985618.git.Jens.Gustedt@inria.fr>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com

This provides two interfaces, __lock_fast and __unlock_fast, both
"static inline", which should result in better integration of the lock
at its place of use. The slow path of the lock algorithm remains
centralized; there, the added overhead of a function call is not a big
deal.

This must only be used directly by a TU that encapsulates all LOCK and
UNLOCK calls of a particular lock object.
---
 src/internal/__lock.h | 21 +++++++++++++++++++++
 src/internal/libc.h   |  1 +
 src/thread/__lock.c   | 20 +++++---------------
 3 files changed, 27 insertions(+), 15 deletions(-)
 create mode 100644 src/internal/__lock.h

diff --git a/src/internal/__lock.h b/src/internal/__lock.h
new file mode 100644
index 00000000..f16d6176
--- /dev/null
+++ b/src/internal/__lock.h
@@ -0,0 +1,21 @@
+#include "pthread_impl.h"
+
+static inline void __lock_fast(volatile int *l)
+{
+	extern void __lock_slow(volatile int*, int);
+	if (!libc.threads_minus_1) return;
+	/* fast path: INT_MIN for the lock, +1 for the congestion */
+	int current = a_cas(l, 0, INT_MIN + 1);
+	if (!current) return;
+	__lock_slow(l, current);
+}
+
+static inline void __unlock_fast(volatile int *l)
+{
+	/* Check l[0] to see if we are multi-threaded. */
+	if (l[0] < 0) {
+		if (a_fetch_add(l, -(INT_MIN + 1)) != (INT_MIN + 1)) {
+			__wake(l, 1, 1);
+		}
+	}
+}
diff --git a/src/internal/libc.h b/src/internal/libc.h
index 5e145183..a594d0c5 100644
--- a/src/internal/libc.h
+++ b/src/internal/libc.h
@@ -47,6 +47,7 @@ extern size_t __sysinfo ATTR_LIBC_VISIBILITY;
 extern char *__progname, *__progname_full;
 
 /* Designed to avoid any overhead in non-threaded processes */
+void __lock_slow(volatile int *, int) ATTR_LIBC_VISIBILITY;
 void __lock(volatile int *) ATTR_LIBC_VISIBILITY;
 void __unlock(volatile int *) ATTR_LIBC_VISIBILITY;
 int __lockfile(FILE *) ATTR_LIBC_VISIBILITY;
diff --git a/src/thread/__lock.c b/src/thread/__lock.c
index 45557c88..a3d8d4d0 100644
--- a/src/thread/__lock.c
+++ b/src/thread/__lock.c
@@ -1,4 +1,5 @@
 #include "pthread_impl.h"
+#include "__lock.h"
 
 /* This lock primitive combines a flag (in the sign bit) and a
  * congestion count (= threads inside the critical section, CS) in a
@@ -16,12 +17,11 @@
  * with INT_MIN as a lock flag. */
 
-void __lock(volatile int *l)
+weak_alias(__lock_fast, __lock);
+weak_alias(__unlock_fast, __unlock);
+
+void __lock_slow(volatile int *l, int current)
 {
-	if (!libc.threads_minus_1) return;
-	/* fast path: INT_MIN for the lock, +1 for the congestion */
-	int current = a_cas(l, 0, INT_MIN + 1);
-	if (!current) return;
 	/* A first spin loop, for medium congestion. */
 	for (unsigned i = 0; i < 10; ++i) {
 		if (current < 0) current -= INT_MIN + 1;
@@ -48,13 +48,3 @@ void __lock(volatile int *l)
 		current = val;
 	}
 }
-
-void __unlock(volatile int *l)
-{
-	/* Check l[0] to see if we are multi-threaded. */
-	if (l[0] < 0) {
-		if (a_fetch_add(l, -(INT_MIN + 1)) != (INT_MIN + 1)) {
-			__wake(l, 1, 1);
-		}
-	}
-}
-- 
2.15.1
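
Note (illustration only, not part of the patch): as the commit message
says, the new header is only meant for a TU that encapsulates all LOCK
and UNLOCK calls of one particular lock object. Under that assumption,
such a TU might use the inlined fast paths roughly as sketched below;
the lock object and function names are made up for the example.

#include "__lock.h"

/* hypothetical lock object, owned entirely by this TU */
static volatile int example_lock[1];

void example_protected_op(void)
{
	/* inlined fast path; falls back to __lock_slow() on contention */
	__lock_fast(example_lock);
	/* ... critical section guarded by example_lock ... */
	/* inlined unlock; wakes one waiter if other threads entered the CS */
	__unlock_fast(example_lock);
}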