From: Rich Felker
To: musl@lists.openwall.com
Subject: Re: [PATCH 2/2] arm: enable a_ll and a_sc helper functions when building for ARMv6T2
Date: Thu, 19 Apr 2018 15:25:19 -0400
Message-ID: <20180419192519.GN3094@brightrain.aerifal.cx>

On Thu, Apr 19, 2018 at 12:14:51PM -0700, Andre McCurdy wrote:
> On Thu, Apr 19, 2018 at 9:38 AM, Rich Felker wrote:
> > On Wed, Apr 18, 2018 at 06:51:44PM -0700, Andre McCurdy wrote:
> >> ARMv6 cores with support for Thumb2 can take advantage of the "ldrex"
> >> and "strex" based implementations of a_ll and a_sc.
> >> ---
> >>  arch/arm/atomic_arch.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm/atomic_arch.h b/arch/arm/atomic_arch.h
> >> index 5ff1be1..62458b4 100644
> >> --- a/arch/arm/atomic_arch.h
> >> +++ b/arch/arm/atomic_arch.h
> >> @@ -8,7 +8,7 @@ extern uintptr_t __attribute__((__visibility__("hidden")))
> >>  __a_cas_ptr, __a_barrier_ptr;
> >>
> >>  #if ((__ARM_ARCH_6__ || __ARM_ARCH_6K__ || __ARM_ARCH_6KZ__ || __ARM_ARCH_6ZK__) && !__thumb__) \
> >> - || __ARM_ARCH_7A__ || __ARM_ARCH_7R__ || __ARM_ARCH >= 7
> >> + || __ARM_ARCH_6T2__ || __ARM_ARCH_7A__ || __ARM_ARCH_7R__ || __ARM_ARCH >= 7
> >>
> >>  #define a_ll a_ll
> >>  static inline int a_ll(volatile int *p)
> >
> > I'm merging this along with the others, but there is some concern that
> > our use of a_ll/a_sc might not actually be valid on most or all of the
> > archs we currently use it on. Depending on how this turns out it might
> > all be removed at some later time.
>
> That sounds ominous. What's the concern?

Originally ARM didn't document it, but reportedly it's now documented
somewhere that the ll and sc operations carry strong conditions on how
they may be used; RISC-V, and possibly other archs, impose similar
conditions. From memory, they're something along the lines of:

- no intervening loads or stores between the ll and the sc
- a limit on the number of instructions between the ll and the sc
- no jumps or branches between the ll and the sc

There is no way to guarantee these kinds of conditions when the
compiler is free to move the ll and sc asm blocks independently.
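
To make the hazard concrete, here is roughly the shape of the code in
question (a from-memory sketch, not verbatim musl source, with the
surrounding barriers omitted): a_ll and a_sc are each a single
inline-asm block, and the generic a_cas builds the retry loop around
them in plain C.

#define a_ll a_ll
static inline int a_ll(volatile int *p)
{
	int v;
	__asm__ __volatile__ ("ldrex %0, %1" : "=r"(v) : "Q"(*p));
	return v;
}

#define a_sc a_sc
static inline int a_sc(volatile int *p, int v)
{
	int r;
	/* strex writes 0 to r on success, 1 if the reservation was lost */
	__asm__ __volatile__ ("strex %0,%2,%1"
		: "=&r"(r), "=Q"(*p) : "r"(v) : "memory");
	return !r;
}

static inline int a_cas(volatile int *p, int t, int s)
{
	int old;
	/* The comparison and loop branch here are compiler-generated
	 * code sitting between the ldrex and the strex, and nothing
	 * stops the compiler from scheduling spills or other memory
	 * accesses into that window as well. */
	do old = a_ll(p);
	while (old == t && !a_sc(p, s));
	return old;
}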
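
If the conditions do have to be honored, the only obvious way to meet
them by construction is to fold the whole loop into a single asm block,
along the lines of this untested sketch. Note that even this keeps a
short fixed branch between the ldrex and the strex on the mismatch
path, so whether it satisfies the documented rules to the letter would
itself need checking:

static inline int a_cas(volatile int *p, int t, int s)
{
	int old, r;
	__asm__ __volatile__ (
		"1:	ldrex %0, %2\n"
		"	cmp %0, %3\n"
		"	bne 2f\n"		/* value mismatch: bail out */
		"	strex %1, %4, %2\n"
		"	cmp %1, #0\n"
		"	bne 1b\n"		/* reservation lost: retry */
		"2:"
		: "=&r"(old), "=&r"(r), "+Q"(*p)
		: "r"(t), "r"(s)
		: "memory", "cc");
	return old;
}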

From a practical standpoint, it looks like the conditions are overly
conservative, designed to allow CPU implementations with very bad
cache designs (direct-mapped, small, etc.), but they may turn out to
be relevant, so we need to evaluate whether we actually need to care
about this...

Rich