From: Rich Felker
Subject: Re: cpuset/affinity interfaces and TSX lock elision in musl
Date: Fri, 17 May 2013 13:29:03 -0400
To: musl@lists.openwall.com
Message-ID: <20130517172902.GC20323@brightrain.aerifal.cx>
In-Reply-To: <20130517112802.GA6699@port70.net>

On Fri, May 17, 2013 at 01:28:02PM +0200, Szabolcs Nagy wrote:
> * Daniel Cegiełka [2013-05-17 09:41:18 +0200]:
> > >> 2) The upcoming glibc will have support for TSX lock elision.
> > >>
> > >> http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions
> > >>
> > >> http://lwn.net/Articles/534761/
> > >>
> > >> Is there any outlook that we can support TSX lock elision in musl?
> > >
> > > I was involved in the discussions about lock elision on the glibc
> > > mailing list, and from what I could gather, it's a pain to implement
> > > and whether it brings you any benefit is questionable.
> >
> > There is currently no hardware support, so the tests were done in the
> > emulator. It's too early to say there's no performance gain.

I agree it's too early. That's why I said I'd like to wait and see
before doing anything. My view is that what glibc is doing is (1) an
experiment to see whether it's worthwhile, and (2) a
buzzword-compliance gimmick whereby Linux vendors and Intel can show
off that they have a state-of-the-art new feature (regardless of
whether it's useful).

> it's not the lock performance that's questionable
> but the benefits

Yes. An artificial benchmark that spams lock requests would not be
very interesting, and for real-world usage it's much more
questionable whether lock elision would help or hurt. The canonical
case where it would hurt is:

1. Take lock
2. Do expensive computation
3. Output results via syscall
4. Release lock

In this case the expensive computation gets performed twice: the
syscall cannot take place inside a transaction, so it aborts the
transaction, and the whole critical section must be re-run with the
lock actually held (see the sketch below). It may be possible to
avoid all of the costly cases by adaptively turning off elision for
particular locks (or of course by having the application manually
tune it, but that's hideous), with corresponding complexity costs.
Unless the _gains_ in the good cases are sufficiently large, however,
I think that complexity would be misspent.
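For concreteness, here is a rough sketch of that pattern using
Intel's RTM intrinsics. This is purely illustrative, not glibc's or
anyone's actual implementation; expensive_computation, buf and n are
hypothetical stand-ins for steps 2 and 3 above:

    #include <immintrin.h>  /* _xbegin/_xend/_xabort; build with -mrtm */
    #include <unistd.h>

    static volatile int lock;   /* toy spinlock: 0 = free, 1 = held */
    static char buf[64];
    static size_t n;

    /* hypothetical stand-in for the expensive computation (step 2) */
    static void expensive_computation(void)
    {
            for (int i = 0; i < 63; i++) buf[i] = 'a' + i % 26;
            n = 63;
    }

    void critical_section(void)
    {
            if (_xbegin() == _XBEGIN_STARTED) {
                    if (lock) _xabort(0xff); /* real owner active */
                    expensive_computation(); /* runs speculatively */
                    write(1, buf, n);        /* syscall: aborts the
                                                transaction, discarding
                                                the work done above */
                    _xend();
                    return;                  /* never reached */
            }
            /* abort path: take the lock for real and redo everything,
               so the expensive computation ends up running twice */
            while (__sync_lock_test_and_set(&lock, 1))
                    ;
            expensive_computation();
            write(1, buf, n);
            __sync_lock_release(&lock);
    }

The syscall forces a transition out of transactional execution, so
control returns to _xbegin with an abort status and the fallback path
repeats all the work under the real lock.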
In some sense, perhaps a better place for lock elision would be at
the _compiler_ level. If the compiler could analyze the code and
determine that there is an unconditional path from the lock to the
corresponding unlock with no intervening external calls (think:
adding or removing an item from a linked list), it could emit a code
path that uses lock elision rather than locking. However, this seems
to require intricate cooperation between the compiler and the library
implementation, which is unacceptable to me...

> locks should not be the bottleneck in applications
> unless there is too much shared state on hot paths,
> which is probably a design bug or a special use-case
> for which non-standard synchronization methods may
> be better anyway

One place where there is unfortunately a huge amount of shared state
is memory management; this is inevitable. Even if we don't use lock
elision for pthread locks, it might be worth considering using it
_internally_ in malloc when it's available. It's hard to say without
any measurements, but this might result in a malloc that beats
ptmalloc, etc. without any thread-local management.
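A rough sketch of what that internal use could look like (again just
illustrative; malloc_lock/malloc_unlock and the one-shot adaptive
flag are hypothetical, and a real version would want a smarter
re-enable policy):

    #include <immintrin.h>  /* RTM intrinsics; build with -mrtm */

    static volatile int mlock;  /* hypothetical global malloc lock */
    static int elide = 1;       /* crude adaptive switch */

    static void malloc_lock(void)
    {
            if (elide && _xbegin() == _XBEGIN_STARTED) {
                    /* reading mlock puts it in our read-set: if another
                       thread takes it for real, we abort automatically */
                    if (!mlock) return;      /* elided acquisition */
                    _xabort(0xff);           /* owner active; fall back */
            }
            elide = 0;  /* an abort happened (or elision was off);
                           a real version would re-enable it later */
            while (__sync_lock_test_and_set(&mlock, 1))
                    ;
    }

    static void malloc_unlock(void)
    {
            if (!mlock) _xend();             /* elided: commit */
            else __sync_lock_release(&mlock);
    }

Testing the lock word inside the transaction is what lets elided and
non-elided owners exclude each other without any extra state.

Rich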