From: Rich Felker
Subject: Re: Reviving planned ldso changes
Date: Wed, 4 Jan 2017 01:22:03 -0500
To: musl@lists.openwall.com
Message-ID: <20170104062203.GN1555@brightrain.aerifal.cx>
In-Reply-To: <20170104060640.GM1555@brightrain.aerifal.cx>
References: <20170103054351.GA8459@brightrain.aerifal.cx> <20170104060640.GM1555@brightrain.aerifal.cx>

On Wed, Jan 04, 2017 at 01:06:40AM -0500, Rich Felker wrote:
> diff --git a/ldso/dynlink.c b/ldso/dynlink.c
> index c689084..cb82b2c 100644
> --- a/ldso/dynlink.c
> +++ b/ldso/dynlink.c
> @@ -67,6 +67,7 @@ struct dso {
>  	char constructed;
>  	char kernel_mapped;
>  	struct dso **deps, *needed_by;
> +	size_t next_dep;
>  	char *rpath_orig, *rpath;
>  	struct tls_module tls;
>  	size_t tls_id;
> @@ -1211,13 +1212,14 @@ void __libc_exit_fini()
>  static void do_init_fini(struct dso *p)
>  {
>  	size_t dyn[DYN_CNT];
> -	int need_locking = libc.threads_minus_1;
> -	/* Allow recursive calls that arise when a library calls
> -	 * dlopen from one of its constructors, but block any
> -	 * other threads until all ctors have finished. */
> -	if (need_locking) pthread_mutex_lock(&init_fini_lock);
> -	for (; p; p=p->prev) {
> -		if (p->constructed) continue;
> +	pthread_mutex_lock(&init_fini_lock);
> +	while (!p->constructed) {
> +		while (p->deps[p->next_dep] && p->deps[p->next_dep]->constructed)
> +			p->next_dep++;
> +		if (p->deps[p->next_dep]) {
> +			p = p->deps[p->next_dep++];
> +			continue;
> +		}

I think this logic is probably broken in the case of circular
dependencies, and will end up skipping ctors for some deps due to the
increment in this line:

+		p = p->deps[p->next_dep++];

which happens before the new p's ctors have actually run. Omitting the
increment here, however, would turn it into an infinite loop. We need
some way to detect this case and let the dso with an indirect
dependency on itself run its ctors once all non-circular deps have
been satisfied. I don't know the right algorithm for this offhand,
though; suggestions would be welcome.

Rich
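
For illustration, here is a minimal sketch (standalone C, not against
the real ldso/dynlink.c) of one cycle-tolerant ordering: a post-order
walk that marks each dso while it is on the active dependency path, so
a dep already on that path is recognized as a cycle edge and deferred
rather than revisited. The struct node type, the on_stack flag, and the
run_ctors/construct helpers are hypothetical names invented for this
example; this is only one possible shape for such an algorithm, not the
one musl adopted.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for struct dso: only the fields
 * needed to illustrate the ordering, not musl's real layout. */
struct node {
	const char *name;
	struct node **deps;   /* NULL-terminated dependency list */
	int constructed;      /* ctors have already run */
	int on_stack;         /* currently on the active DFS path */
};

static void run_ctors(struct node *p)
{
	/* placeholder for invoking the dso's actual ctors */
	printf("ctors for %s\n", p->name);
}

/* Post-order walk: construct all deps first. A dep already on the
 * active path is a cycle edge and is skipped, so the cycle member
 * reached first runs its ctors last, once all of its non-circular
 * deps have been satisfied. */
static void construct(struct node *p)
{
	if (p->constructed || p->on_stack) return;
	p->on_stack = 1;
	for (size_t i=0; p->deps && p->deps[i]; i++)
		construct(p->deps[i]);
	p->on_stack = 0;
	p->constructed = 1;
	run_ctors(p);
}

int main(void)
{
	/* a and b depend on each other: a circular dependency */
	struct node a = { "a" }, b = { "b" };
	struct node *a_deps[] = { &b, 0 }, *b_deps[] = { &a, 0 };
	a.deps = a_deps;
	b.deps = b_deps;
	construct(&a);   /* prints: ctors for b, then ctors for a */
	return 0;
}

Recursion like this is not directly usable inside the dynamic linker
(unbounded stack depth on long dependency chains), so a real
implementation would need an iterative equivalent, but the invariant is
the same: each dso runs its ctors exactly once, after every dependency
that is not part of the cycle currently being resolved.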