From: "Z. Gilboa"
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: sign (in)consistency between architectures
Date: Wed, 1 May 2013 21:39:00 -0400
Message-ID: <5181C3B4.4040801@eservices.virginia.edu>
In-Reply-To: <20130501224132.GN20323@brightrain.aerifal.cx>
Reply-To: musl@lists.openwall.com

On 01.05.2013 18:41, Rich Felker wrote:
> On Wed, May 01, 2013 at 04:00:07PM -0400, Rich Felker wrote:
>> On Wed, May 01, 2013 at 08:00:15PM +0200, Szabolcs Nagy wrote:
>>> * Z. Gilboa <zg7s@eservices.virginia.edu> [2013-05-01 13:05:03 -0400]:
>>>> The current architecture-specific type definitions
>>>> (arch/*/bits/alltypes.h) seem to entail the following inconsistent
>>>> signed/unsigned types:
>>>>
>>>> type      x86_64        i386
>>>> -------------------------------
>>>> uid_t     unsigned      signed
>>>> gid_t     unsigned      signed
>>>> dev_t     unsigned      signed
>>>> clock_t   signed        unsigned
>>>
>>> i can verify that glibc uses unsigned
>>> uid_t, gid_t, dev_t and signed clock_t
>>>
>>> of course applications should not depend on
>>> the signedness, but if they appear in a c++
>>> api then the difference can cause problems
>>>
>>> and clock_t may be used in arithmetic where
>>> signedness matters
>>
>> uid_t, gid_t, and dev_t we can consider changing; I don't think it
>> matters a whole lot, and like you said they affect the C++ ABI. clock_t
>> cannot be changed without making the clock() function unusable. See
>> glibc bug #13080 (WONTFIX):
>>
>> http://sourceware.org/bugzilla/show_bug.cgi?id=13080
>
> I just posted a followup on this bug: from what I can tell, it's
> questionable whether having the return value of clock() wrap is
> conforming even if clock_t is an unsigned type, and definitely
> non-conforming if it's a signed type. As such, I see three possible
> solutions:
>
> 1. Leave things alone and do it the way musl does it now, where
> subtracting (unsigned) results works. We should probably add a check
> to see if the return value would be equal to (clock_t)-1, and if so,
> either add or subtract 1, so that the caller does not interpret the
> return value as an error.
>
> 2. Change clock_t to a signed type, and have clock() check for
> overflow and permanently return -1 once the process has used more than
> 2147 seconds of CPU time. This seems undesirable to applications.
>
> 3. Change clock_t to long long on 32-bit targets. This would be
> formally incompatible with the glibc/LSB ABI, but in practice the
> worst that would happen is that the register containing the upper bits
> would get ignored.
>
> Any opinions on the issue?
>
> Rich

I consider the difference in sign to be of much greater significance, and
would therefore prefer option #3.  Besides, with enough patience and
perseverance (/der lange Marsch durch die Institutionen.../, the long march
through the institutions), this might actually become the glibc solution as
well :)