From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: pthread_getattr_np
Date: Sun, 31 Mar 2013 14:07:17 -0400
Message-ID: <20130331180717.GI20323@brightrain.aerifal.cx>
References: <20130331173518.GH30576@port70.net>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com
In-Reply-To: <20130331173518.GH30576@port70.net>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Sun, Mar 31, 2013 at 07:35:19PM +0200, Szabolcs Nagy wrote:
> pthread_getattr_np is used by some libs (gc, sanitizer) to get the
> beginning of the stack of a thread
>
> it is a gnu extension but bsds have similar non-portable functions
> (none of them are properly documented)
>
> glibc: pthread_getattr_np
> freebsd: pthread_attr_get_np
> netbsd: pthread_attr_get_np and pthread_getattr_np
> openbsd: pthread_stackseg_np
> osx: pthread_get_stackaddr_np, pthread_get_stacksize_np
> solaris: thr_stksegment
> hp-ux: _pthread_stack_info_np
>
> (glibc and freebsd use locks to synchronize the reading of
> thread attributes, may matter for detach state)
> (glibc and openbsd try to get correct info for the main
> stack, others don't)
>
> returning a reasonable result for the main stack is not trivial
> (the stack starts at some unspecified address and can grow downward
> until the stack rlimit or some already-mmapped region is hit)
>
> possible ways to get the top of the main thread stack:
>
> 1) /proc/self/maps (fopen, scanf, ...; this is precise, glibc does
> this)
>
> 2) save the top address of the stack at libc entry somewhere
> (eg by keeping a pointer to the original environ, which is high
> up the stack); this is a good approximation but underestimates
> top by a few pages
>
> 3) the previous approach can be tweaked: the real top is close,
> so the next few pages can be checked to see if they are mapped
> (eg with madvise) and we declare the address of the first unmapped
> page as the top (usually this gives a precise result, but when the
> pages above the stack are mapped it can overestimate top by a few
> pages)
>
> then the stack is [top-rlimit, top]
> (the low end should be tweaked when it overlaps with existing
> mappings, the current sp is below it, or rlimit is unlimited)

Getting the high address (or "top" as you've called it) is trivial;
your efforts to find the end of the last page that's part of the
"stack mapping" are unnecessary. Any address that's past the address
of any automatic variable in the main thread, but such that all pages
in between are valid, is a valid choice for the upper-limit address.

The hard part is getting the lower limit. The rlimit is not a valid
way to measure this: for example, rlimit could be unlimited, or the
stack might have already grown large before the rlimit was reduced.
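For reference, a minimal sketch of the /proc/self/maps approach
(option 1 above, roughly what glibc does): find the mapping that
contains the address of a known automatic variable. The helper name
and interface here are made up for illustration, not actual musl or
glibc code.

```c
#include <stdio.h>

/* On success, returns 0 and sets *lo/*hi to the bounds of the
 * mapping containing address p; returns -1 on failure. Each line
 * of /proc/self/maps begins with "start-end " in hex. Assumes no
 * maps line exceeds the buffer; a real implementation would handle
 * long pathname lines. */
static int stack_bounds_from_maps(void *p, unsigned long *lo,
                                  unsigned long *hi)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	unsigned long a = (unsigned long)p;
	if (!f) return -1;
	while (fgets(line, sizeof line, f)) {
		unsigned long start, end;
		if (sscanf(line, "%lx-%lx", &start, &end) == 2
		    && start <= a && a < end) {
			*lo = start;
			*hi = end;
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	return -1;
}
```

Note that this reports only the current extent of the stack mapping;
since the kernel grows the main stack on demand, the low bound it
returns is not the eventual rlimit-based limit.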
In practice, it seems like GC applications only care about the start
(upper limit) of the stack, not the other end; they use the current
stack pointer for the other limit. We could probe the current stack
pointer of the target thread by freezing it (with the synccall magic),
but this seems like it might be excessively costly for no practical
benefit...

Rich
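As an illustration of the environ-based approximation (option 2 in the
quoted message): at program start the environment strings sit just
below the real top of the main thread's stack, so their highest
address underestimates the top by at most a few pages. The function
name is hypothetical, and the sketch assumes setenv et al. have not
relocated the environment to the heap.

```c
#include <string.h>

extern char **environ;

/* Approximate the top of the main-thread stack by scanning the
 * environment: take the highest end address among the pointer
 * array and the strings it points to. Illustrative only; valid
 * only while environ is unmodified since program entry. */
static unsigned long approx_stack_top(void)
{
	char **e;
	unsigned long top = (unsigned long)environ;
	for (e = environ; *e; e++) {
		unsigned long end = (unsigned long)*e + strlen(*e) + 1;
		if (end > top) top = end;
	}
	return top;
}
```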