From: Rich Felker
To: musl@lists.openwall.com
Subject: Re: What's wrong with musl's malloc
Date: Sat, 21 Apr 2018 01:52:36 -0400
Message-ID: <20180421055236.GV3094@brightrain.aerifal.cx>
In-Reply-To: <20180421052812.GU3094@brightrain.aerifal.cx>

On Sat, Apr 21, 2018 at 01:28:12AM -0400, Rich Felker wrote:
> One protocol that might work:
>
> When locking multiple bins, they must be locked in decreasing order.
>
> For malloc, this is the natural thing to do -- the bin you return the
> excess to will be the same or smaller than the bin you allocate from.
>
> For free, take the freelock from the beginning, rather than just
> momentarily at the end. Then you know the bin sizes you'll be working
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> with (up to at most 3 -- self, prev, and next) and can lock them in
       ^^^^^^^^^^^^^^^^^^^^^

I don't think this is actually quite correct. A concurrent malloc
could consume part of either of the adjacent free chunks. It's okay if
it consumes the whole thing, but if it only consumes part, you'll end
up with a new adjacent free chunk of a different size (a different
bin).

Of course loop-and-retry is an option, but it doesn't seem like a good
one; the retry time could be unbounded. Not sure if this is
salvageable or not.

Problems like this keep pointing in the direction of wanting to lock
chunks (via their header/footer) rather than locking just bins. On the
plus side, that's much finer-grained locking and MT-friendly.
Unfortunately it's also more complex and might have more fixed
overhead.

Rich