One of those questions for which there is no search engine incantation. Jon
I can’t find my copy of “pixel”, but I would guess he might mention that.
> On Sep 8, 2022, at 9:51 AM, Jon Steinhart <jon@fourwinds.com> wrote:
>
> One of those questions for which there is no search engine incantation.
>
> Jon
On 9/8/22 12:51, Jon Steinhart wrote:
> One of those questions for which there is no search engine incantation.
>
> Jon

The famous 1946 paper, "Preliminary discussion of the logical design of an
electronic computing device", by Arthur Burks, Herman H. Goldstine, and John
von Neumann, contains this sentence. I have this paper in Computer
Structures: Readings and Examples, by Bell and Newell, but it's also online
in many forms:

    4. The memory organ

    4.1. Ideally one would desire an indefinitely large memory capacity such
    that any particular aggregate of 40 binary digits, or /word/ (cf. 2.3),
    would be immediately available--i.e. in a time which is somewhat or
    considerably shorter than the operation time of a fast electronic
    multiplier.

I also looked in the Oxford English Dictionary for etymology. It has:

    *d.* /Computing/. A consecutive string of bits (now typically 16, 32, or
    64, but formerly fewer) that can be transferred and stored as a unit.
    /machine word/: see /machine word/ n. at machine n. Compounds 2
    <https://www-oed-com.ezproxy.bpl.org/view/Entry/111850#eid38480019>.

    1946 H. H. Goldstine & J. Von Neumann in J. von Neumann /Coll. Wks./
    (1963) V. 28 In ‘writing’ a word into the memory, it is similarly not
    only the time effectively consumed in ‘writing’ which matters, but also
    the time needed to ‘find’ the specified location in the memory.

    [plus newer citations]

Dan H
> It was used, in the modern sense, in "Planning a Computer System",
> Buchholz, 1962.

Also in the IBM "650 Manual of Operation", June, 1955. (Before I was born! :-)

	Noel
See "The Preparation of Programs for an Electronic Digital Computer", by
Maurice V. Wilkes, David J. Wheeler, and Stanley Gill, copyright 1951, p. 5,
section 1-4:

    "The store is divided into a number of registers or storage locations;
    the content of a storage location is a sequence of 0's and 1's, and may
    represent an order or a number. The term word is used for the content of
    a storage location if it is desired to refer to it without specifying
    whether it represents a number or an order."

Jim

From: "Noel Chiappa" <jnc@mercury.lcs.mit.edu>
To: tuhs@tuhs.org
Cc: jnc@mercury.lcs.mit.edu
Sent: Thursday, September 8, 2022 2:20:51 PM
Subject: [TUHS] Re: Does anybody know the etymology of the term "word" as in collection of bits?

> It was used, in the modern sense, in "Planning a Computer System",
> Buchholz, 1962.

Also in the IBM "650 Manual of Operation", June, 1955. (Before I was born! :-)

	Noel
> From: Jim Capp

> See "The Preparation of Programs for an Electronic Digital Computer",
> by Maurice V. Wilkes, David J. Wheeler, and Stanley Gill

Blast! I looked in the index in my copy (ex the Caltech CS Dept Library :-),
but didn't find 'word' in the index!

Looking a little further, Turing's ACE Report, from 1946, uses the term
(section 4, pg. 25; "minor cycle, or word"). My copy, the one edited by
Carpenter and Doran, has a note #1 by them, "Turing seems to be the first
user of 'word' with this meaning." I have Brian's email; I can ask him how
they came to that determination, if you'd like.

There aren't many things older than that! I looked quickly through the
"First Draft on the EDVAC", 1945 (re-printed in "From ENIAC to UNIVAC", by
Stern), but did not see 'word' there. It does use the term "minor cycle",
though.

Other places worth checking are the IBM/Harvard Mark I, the ENIAC, and ...
I guess there's not much else! Oh, there was a relay machine at Bell, too.
The Atanasoff-Berry computer?

> From: "John P. Linderman"

> He claims that if you wanted to do decimal arithmetic on a binary
> machine, you'd want to have 10 digits of accuracy to capture the 10
> digit log tables that were then popular.

The EDVAC draft talks about needing 8 decimal digits (Appendix A, pg. 190);
apparently von Neumann knew that that's how many digits one needed for
reasonable accuracy in differential equations. That is 27 "binary digits"
(apparently 'bit' hadn't been coined yet).

	Noel
I did send this already, from 1946; did you see it?

The famous 1946 paper, "Preliminary discussion of the logical design of an
electronic computing device", by Arthur Burks, Herman H. Goldstine, and John
von Neumann, contains this sentence. I have this paper in Computer
Structures: Readings and Examples, by Bell and Newell, but it's also online
in many forms:

    4. The memory organ

    4.1. Ideally one would desire an indefinitely large memory capacity such
    that any particular aggregate of 40 binary digits, or -word- (cf. 2.3),
    would be immediately available--i.e. in a time which is somewhat or
    considerably shorter than the operation time of a fast electronic
    multiplier.

    [word is in italics]

On 9/8/22 17:16, Noel Chiappa wrote:
>> From: Jim Capp
>
>> See "The Preparation of Programs for an Electronic Digital Computer",
>> by Maurice V. Wilkes, David J. Wheeler, and Stanley Gill
>
> Blast! I looked in the index in my copy (ex the Caltech CS Dept Library :-),
> but didn't find 'word' in the index!
>
> Looking a little further, Turing's ACE Report, from 1946, uses the term
> (section 4, pg. 25; "minor cycle, or word"). My copy, the one edited by
> Carpenter and Doran, has a note #1 by them, "Turing seems to be the first
> user of 'word' with this meaning." I have Brian's email, I can ask him how
> they came to that determination, if you'd like.
>
> There aren't many things older than that! I looked quickly through the
> "First Draft on the EDVAC", 1945 (re-printed in "From ENIAC to UNIVAC", by
> Stern), but did not see word there. It does use the term "minor cycle",
> though.
>
> Other places worth checking are the IBM/Harvard Mark I, the ENIAC and ...
> I guess there's not much else! Oh, there was a relay machine at Bell, too.
> The Atanasoff-Berry computer?
>
>> From: "John P. Linderman"
>
>> He claims that if you wanted to do decimal arithmetic on a binary
>> machine, you'd want to have 10 digits of accuracy to capture the 10
>> digit log tables that were then popular.
>
> The EDVAC draft talks about needing 8 decimal digits (Appendix A, pg. 190);
> apparently von Neumann knew that that's how many digits one needed for
> reasonable accuracy in differential equations. That is 27 "binary digits"
> (apparently 'bit' hadn't been coined yet).
>
> Noel
On Thursday, 8 September 2022 at 13:28:13 -0400, Dan Halbert wrote:
> I also looked in the Oxford English Dictionary for etymology. It has:
>
>     *d.* /Computing/. A consecutive string of bits (now typically 16,
>     32, or 64, but formerly fewer) that can be transferred and stored as
>     a unit. /machine word/: see /machine word/ n. at machine n. Compounds
>     2 <https://www-oed-com.ezproxy.bpl.org/view/Entry/111850#eid38480019>.
>
>     1946 H. H. Goldstine & J. Von Neumann in J. von Neumann /Coll. Wks./
>     (1963) V. 28 In ‘writing’ a word into the memory, it is similarly
>     not only the time effectively consumed in ‘writing’ which matters,
>     but also the time needed to ‘find’ the specified location in the memory.

Since we're searching the OED, there are a couple of others. The /machine
word/ mentioned above has:

    machine word n. Computing: a word of the length appropriate for a
    particular fixed word-length computer.

    1954 Computers & Automation Dec. 16/1 Machine word, a unit of
    information of a standard number of characters, which a machine
    regularly handles in each register.

This makes the meaning clearer, I think, though it doesn't seem to be a
change in meaning.

On Thursday, 8 September 2022 at 17:16:35 -0400, Noel Chiappa wrote:
> Looking a little further, Turing's ACE Report, from 1946, uses the
> term (section 4, pg. 25; "minor cycle, or word"). My copy, the one
> edited by Carpenter and Doran, has a note #1 by them, "Turing seems
> to be the first user of 'word' with this meaning." I have Brian's
> email, I can ask him how they came to that determination, if you'd
> like.

I don't see that this is the same meaning. Do you? "Minor cycle" suggests
timing parameters. But it would be interesting to know whether this document
pre- or postdates Goldstine and von Neumann.

And since we were also talking about bits, it seems that the OED has its own
entry, bit, n.4:

    A unit of information derived from a choice between two equally
    probable alternatives or ‘events’; such a unit stored electronically
    in a computer.

    1948 C. E. Shannon in Bell Syst. Techn. Jrnl. July 380 The choice of a
    logarithmic base corresponds to the choice of a unit for measuring
    information. If the base 2 is used the resulting units may be called
    binary digits, or more briefly bits, a word suggested by J. W. Tukey.

Greg
--
Sent from my desktop computer.
Finger grog@lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
> I heard that the IBM 709
> series had 36 bit words because Arthur Samuel,
> then at IBM, needed 32 bits to identify the playable squares on a
> checkerboard, plus some bits for color and kinged
To be precise, Samuel's checkers program was written for
the 701, which originated the architecture that the 709 inherited.
Note that IBM punched cards had 72 data columns plus 8
columns typically dedicated to sequence numbers. 700-series
machines supported binary IO encoded two words per row, 12
rows per card--a perfect fit to established technology. (I do
not know whether the fit was deliberate or accidental.)
As to where the byte came from, it was christened for the IBM
Stretch, aka 7020. The machine was bit-addressed and the width
of a byte was variable. Multidimensional arrays of packed bytes
could be streamed at blinding speeds. Eight bits, which synced
well with the 7020's 64-bit words, was standardized in the 360
series. The term "byte" was not used in connection with
700-series machines.
Doug
On Thu, Sep 08, 2022 at 09:33:42PM -0400, Douglas McIlroy wrote:
> As to where the byte came from, it was christened for the IBM
> Stretch, aka 7020. The machine was bit-addressed and the width
> of a byte was variable.

Huh, I did a lot of the Unix port to the ETA-10; that was the only machine I
encountered that had bit pointers. I never understood why that was a thing.
Doug, do you know the rationale for bit pointers?

The ETA-10 is not well known; I was part of the Lachman group:

https://en.wikipedia.org/wiki/ETA10

My first job after school, I got to watch Neil toggle in the bootstrap stuff
at the console. He wasn't Seymour, but he was very, very good. One of the
more substantial people I've ever met. I would guess he has passed, but if
he hasn't, he would like this group of people.

Whatever. Doug, or anyone, why do bit pointers make sense? Why?
There is a certain amount of "use makes master" about word and byte length.
I think DEC's decision to go with 6-bit for 36-bit was probably fine, in the
context of belief around BCD. That it turned out to be a royal pain in the
neck for an 8-bit-byte world was a bit overblown given its time and place.
People found ways to exploit 4 extra bits in a word to do things. DEC
provided the UUO mechanism; people coded odd things into it. If BCD had been
more significant, who knows how long packed ASCII might have lasted.

The entire field from Bletchley onwards was full of arch, fey witticisms
about machine names. Ending things with -AC (for automatic computer) led to
SILLIAC at Sydney Uni, and there was a sort of poem which included them all,
up to MANIAC. I can't but think the neologism "byte" over "bite-sized chunks
of a whole word" goes directly to this tendency to play with language. And
the times were ones with many strange players on many continents, fertile
ground for wordplay.

8 is a useful number. 5-hole Baudot wasn't enough: with parity, cases, and
control signals in band, data was heading to 8 irrespective. Aligning data
with memory and registers makes sense.

On Fri, Sep 9, 2022 at 9:35 AM Douglas McIlroy <douglas.mcilroy@dartmouth.edu> wrote:
>> I heard that the IBM 709
>> series had 36 bit words because Arthur Samuel,
>> then at IBM, needed 32 bits to identify the playable squares on a
>> checkerboard, plus some bits for color and kinged
>
> To be precise, Samuel's checkers program was written for
> the 701, which originated the architecture that the 709 inherited.
>
> Note that IBM punched cards had 72 data columns plus 8
> columns typically dedicated to sequence numbers. 700-series
> machines supported binary IO encoded two words per row, 12
> rows per card--a perfect fit to established technology. (I do
> not know whether the fit was deliberate or accidental.)
>
> As to where the byte came from, it was christened for the IBM
> Stretch, aka 7020. The machine was bit-addressed and the width
> of a byte was variable. Multidimensional arrays of packed bytes
> could be streamed at blinding speeds. Eight bits, which synced
> well with the 7020's 64-bit words, was standardized in the 360
> series. The term "byte" was not used in connection with
> 700-series machines.
>
> Doug
The IBM 1620 had an unusual architecture in that it did decimal arithmetic
operating on variable-length strings of BCD-encoded decimal digits. It thus
didn't really have a word length at all.

It also didn't have a proper ALU--it did arithmetic by table lookup. The
internal code name for the machine was CADET, which was said to stand for
"Can't Add, Doesn't Even Try".

Arithmetic on variable-length decimal strings was a feature carried over to
the System/360/370 and also the DEC VAX.

Have there been other commercially sold computers without a fixed word
length?

-Paul W.
> Doug or anyone, why do bit pointers make sense? Why?
Bit-addressing is very helpful for manipulating characters
in a word-organized memory. The central idea of my ancient
(patented!) string macros that underlay SNOBOL was that it's
more efficient to refer to 6-bit characters as living at
bits 0,6,12,... of a 36-bit word than as being characters
0,1,2,... of the word. I've heard that this convention was
supported in hardware on the PDP-10.
In the IBM 7020 floats and ints were word-addressed. But
those addresses could be extended past the "decimal point"
to refer to bits. Bits were important. The computer was designed
in parallel with the Harvest streaming "attachment" for
NSA. Harvest was basically intended to gather statistics useful
in code-breaking, such as frequency counts and autocorrelations,
for data typically encoded in packed 5- to 8-bit characters. It
was controlled by a 20-word "setup" that specified operations on
rectangular and triangular indexing patterns in multidimensional
arrays. Going beyond statistics, one of the operations was SQML
(sequential multiple lookup) where each character was looked
up in a table that specified a replacement and a next table--a
spec for an arbitrary Turing machine that moved its tape at
byte-streaming speed!
Doug
On Sep 9, 2022, at 8:49 AM, Paul Winalski <paul.winalski@gmail.com> wrote:
>
> Have there been other commercially sold computers without a fixed word length?
Burroughs B1700? It was a bit addressable machine.
Doug McIlroy:

    Bit-addressing is very helpful for manipulating characters in a
    word-organized memory. The central idea of my ancient (patented!)
    string macros that underlay SNOBOL was that it's more efficient to
    refer to 6-bit characters as living at bits 0,6,12,... of a 36-bit
    word than as being characters 0,1,2,... of the word. I've heard that
    this convention was supported in hardware on the PDP-10.

====

Indeed it was. The DEC-10 had `byte pointers' as well as (36-bit) word
addresses. A byte pointer comprised an address, a starting bit within the
addressed word, and a length. There were instructions to load and store an
addressed byte to or from a register, and to do the same while incrementing
the pointer to the next byte, wrapping to the start of the next word if the
remainder of the current word was too small. (Bytes couldn't span word
boundaries.)

Byte pointers were used routinely to process text. ASCII text was
conventionally stored as five 7-bit bytes packed into each 36-bit word. The
leftover bit was used by some programs as a flag to mean these five
characters (usually the first of a line) were special, e.g. represented a
five-decimal-digit line number.

Byte pointers were used to access Sixbit characters as well (each character
six bits, so six to the word, the character set comprising the 64-character
subset of ASCII starting with 0 == space).

Norman Wilson
Toronto ON
(spent about four years playing with TOPS-10 before growing up to play with UNIX)
Paul Winalski and Bakul Shah commented on bit-addressable machines on the
TUHS list recently. From Blaauw and Brooks' excellent Computer Architecture
book

	http://www.math.utah.edu/pub/tex/bib/master.html#Blaauw:1997:CAC

on page 98, I find

>> ...
>> The earliest computer with bit resolution is the [IBM 7030] Stretch.
>> The Burroughs B1700 (1972) and CDC STAR100 (1973) are later examples.
>>
>> Bit resolution is costly in format space, since it uses a maximum
>> number of bits for address and length specification. Sharpening
>> resolution from the byte to the bit costs the same as increasing
>> address-space size eight-fold.
>>
>> Since almost all storage realizations are organized as matrices,
>> bit resolution is also expensive in time or equipment.
>> ...

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                                                          -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe@math.utah.edu  -
- 155 S 1400 E RM 233                       beebe@acm.org  beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
On Sep 9, 2022, at 12:39 PM, Nelson H. F. Beebe <beebe@math.utah.edu> wrote:
>
> Paul Winalski and Bakul Shah commented on bit addressable machines
> on the TUHS list recently. From Blaauw and Brooks' excellent
> Computer Architecture book
>
> http://www.math.utah.edu/pub/tex/bib/master.html#Blaauw:1997:CAC
>
> on page 98, I find
>
>>> ...
>>> The earliest computer with bit resolution is the [IBM 7030] Stretch.
>>> The Burroughs B1700 (1972) and CDC STAR100 (1973) are later examples.
>>>
>>> Bit resolution is costly in format space, since it uses a maximum
>>> number of bits for address and length specification. Sharpening
>>> resolution from the byte to the bit costs the same as increasing
>>> address-space size eight-fold.
>>>
>>> Since almost all storage realizations are organized as matrices,
>>> bit resolution is also expensive in time or equipment.
>>> ...

And yet according to Wilner's article, "the B1700 appears to require less
than half the memory needed by byte-oriented systems to represent programs.
Comparisons with word-oriented systems are even more favorable."

Figure 9 shows sample sizes for Cobol, Fortran, and RPG II programs,
comparing B1700 code sizes with other systems. I was surprised to see this
but didn't look further.

https://dl.acm.org/doi/pdf/10.1145/1479992.1480060

From the same paper:

    DESIGN OBJECTIVE

    Burroughs B1700 is a protean attempt to completely vanquish procrustean
    structures, to give 100 percent variability, or the appearance of no
    inherent structure. Without inherent structure, any definable language
    can be efficiently used for computing. There are no word sizes or data
    formats—operands may be any shape or size, without loss of efficiency;
    there are no a priori instructions—machine operations may be any
    function, in any form, without loss of efficiency; configuration
    limits, while not totally removable, can be made to exist only as
    points of "graceful degradation" of performance; modularity may be
    increased, to allow miniconfigurations and supercomputers using the
    same components.
On Fri, 9 Sept 2022 at 16:28, Bakul Shah <bakul@iitbombay.org> wrote:
> On Sep 9, 2022, at 12:39 PM, Nelson H. F. Beebe <beebe@math.utah.edu> wrote:
> >
> > Paul Winalski and Bakul Shah commented on bit addressable machines
> > on the TUHS list recently. From Blaauw and Brooks' excellent
> > Computer Architecture book
> >
> > http://www.math.utah.edu/pub/tex/bib/master.html#Blaauw:1997:CAC
> >
> > on page 98, I find
> >
> >>> ...
> >>> The earliest computer with bit resolution is the [IBM 7030] Stretch.
> >>> The Burroughs B1700 (1972) and CDC STAR100 (1973) are later examples.
> >>>
> >>> Bit resolution is costly in format space, since it uses a maximum
> >>> number of bits for address and length specification. Sharpening
> >>> resolution from the byte to the bit costs the same as increasing
> >>> address-space size eight-fold.
> >>>
> >>> Since almost all storage realizations are organized as matrices,
> >>> bit resolution is also expensive in time or equipment.
> >>> ...
>
> And yet according to Wilner's article "the B1700 appears to
> require less than half the memory needed by byte-oriented
> systems to represent programs. Comparisons with word-oriented
> systems are even more favorable."
>
> Figure 9 shows sample sizes for Cobol, Fortran and RPG II programs
> comparing B1700 code sizes with other systems. I was surprised to
> see this but didn't look further.
>
> https://dl.acm.org/doi/pdf/10.1145/1479992.1480060
>
> From the same paper
>
>     DESIGN OBJECTIVE
>
>     Burroughs B1700 is a protean attempt to completely vanquish
>     procrustean structures, to give 100 percent variability, or
>     the appearance of no inherent structure. Without inherent
>     structure, any definable language can be efficiently used
>     for computing. There are no word sizes or data
>     formats—operands may be any shape or size, without loss of
>     efficiency; there are no a priori instructions—machine
>     operations may be any function, in any form, without loss
>     of efficiency; configuration limits, while not totally
>     removable, can be made to exist only as points of "graceful
>     degradation" of performance; modularity may be increased,
>     to allow miniconfigurations and supercomputers using the
>     same components.

The level of florid language in that paper is truly impressive. This appears
to be an early implementation of intermediate language representation. I
gather from its relative lack of success (I had not heard of it until now)
that it suffered from many of the common performance problems of such
machines (Java bytecode, the Transmeta CPU, etc.) and did not succeed in the
marketplace.

-Henry
On Fri, 9 Sep 2022, Bakul Shah wrote:

> And yet according to Wilner's article "the B1700 appears to require less
> than half the memory needed by byte-oriented systems to represent
> programs. Comparisons with word-oriented systems are even more
> favorable."

The Burroughs series were beautiful machines; the hardware ran native ALGOL
(and thus they were perfect); my favourite "B" still remains the B1500.
Things went downhill after the unholy alliance betwixt M$ and Inhell...

> Figure 9 shows sample sizes for Cobol, Fortran and RPG II programs
> comparing B1700 code sizes with other systems. I was surprised to
> see this but didn't look further.

COBOL? FORTRAN? RPG? Those are all swear words to me :-)

-- Dave
On 9/9/22, Norman Wilson <norman@oclsc.org> wrote:
>
> The DEC-10 had `byte pointers' as well as
> (36-bit) word addresses. A byte pointer comprised an address,
> a starting bit within the addressed word, and a length.
> There were instructions to load and store an addressed byte
> to or from a register, and to do same while incrementing
> the pointer to the next byte, wrapping the start of the next
> word if the remainder of the current word was too small.
> (Bytes couldn't span word boundaries.)
That very closely resembles a field-reference expression in BLISS,
which has the syntax:
    addr<start, len, ext>

where:

    addr  is an expression whose value is the address of the BLISS word
          containing the field
    start is an expression whose value is the starting offset within that
          word of the field
    len   is an expression giving the length of the field in bits
    ext   is an expression whose value is either 0 or 1 that tells how to
          pad out the field to a full BLISS word size
It's probably no accident that BLISS field expressions match this
feature of the DEC-10 hardware.
-Paul W.
Anecdote prompted by the advent of Burroughs in this thread: At the 1968
NATO conference on Software Engineering, the discussion turned to language
design strategies. I noted that the design of Algol 68, for example,
presupposed a word-based machine, whereupon Burroughs architect Bob Barton
brought the house down with the remark, "In the beginning was the Word, all
right--but it was not a fixed number of bits!"

[Algol 68's presupposition is visible in declarations like "long long long
... int". An implementation need support only a limited number of "longs",
but each supported variety must have a definite maximum value, which is
returned by an "environment enquiry" function. For amusement, consider the
natural idea of implementing the longest variety with bignums.]

Doug
On Sun, Sep 11, 2022 at 9:31 AM Douglas McIlroy <douglas.mcilroy@dartmouth.edu> wrote:

> [Algol 68's presupposition is visible in declarations like "long long
> long ... int". An implementation need support only a limited number of
> "longs",

To clarify things for A68 n00bs, you can *write* an indefinite number of
"longs", but after an implementation-defined point they don't add any
further range (and/or precision in the case of floats). This is true in a
truncated way in C as well, where "long" (which is the same as "long int")
may be the same as "int".

> but each supported variety must have a definite maximum
> value, which is returned by an "environment enquiry" function. For
> amusement, consider the natural idea of implementing the longest
> variety with bignums.]

In actual bignum implementations, there is a biggest bignum, but it may not
be possible to actually construct it. In GMP using 64-bit digits, it is
theoretically 2^((2^31 - 1) * (2^64 - 1)), or
2^39,614,081,238,685,424,720,914,939,905, which is Very Big Indeed. But
there is nothing preventing an implementer from watching intermediate
results and throwing an exception if they get too big.
On Sep 11, 2022, at 6:30 AM, Douglas McIlroy <douglas.mcilroy@dartmouth.edu> wrote:
>
> [Algol 68's presupposition is visible in declarations like "long long
> long ... int". An implementation need support only a limited number of
> "longs", but each supported variety must have a definite maximum
> value, which is returned by an "environment enquiry" function. For
> amusement, consider the natural idea of implementing the longest
> variety with bignums.]
It would be natural to use a Kleene star to represent an arbitrarily
long string of LONGs -- "long* int" -- though AFAIK Algol68 doesn't
do bignums. Weirdly, even most 21st-century programming languages do
not provide built-in support for bignums!
C's INT_MAX, LONG_MAX etc are kind of an environment enquiry...
On 9/11/22, Bakul Shah <bakul@iitbombay.org> wrote:
>
> C's INT_MAX, LONG_MAX etc are kind of an environment enquiry...
What size to use in C for int and long (pointers had to be 64-bit; no
issue there) was a big headache for DEC in the migration of Unix
(Ultrix) from VAX to Alpha. The first C compiler implementation used
ILP64 (64 bits for int, long, and pointer) and ran afoul of a lot of
code that assumed an int was 32 bits. ILP64 vs. LP64 became as
divisive an issue as the big-endian vs. little-endian debate.
-Paul W.
On Sun, 11 Sep 2022, Paul Winalski wrote:
> On 9/11/22, Bakul Shah <bakul@iitbombay.org> wrote:
>>
>> C's INT_MAX, LONG_MAX etc are kind of an environment enquiry...
>
> What size to use in C for int and long (pointers had to be 64-bit; no
> issue there) was a big headache for DEC in the migration of Unix
> (Ultrix) from VAX to Alpha. The first C compiler implementation used
> ILP64 (64 bits for int, long, and pointer) and ran afoul of a lot of
> code that assumed an int was 32 bits. ILP64 vs. LP64 became as
> divisive an issue as the big-endian vs. little-endian debate.
>
> -Paul W.
When I first wrote C code I *assumed* things that were only necessarily
true on the platform I learned on (char=8, short=16, long=32; int=16,
pointer could be either 16 or 32, requiring the addition of the keywords
"near" and "far").
(I'm glad C99 introduced <stdint.h>, but on that platform the only C99
compiler is OpenWatcom.)
-uso.
At 09:12 PM 9/8/2022, Larry McVoy wrote:
>My first job after school, I got to watch Neil toggle in the bootstrap
>stuff at the console. He wasn't Seymour but he was very very good.
>One of the more substantial people I've ever met, I would guess he
>has passed but if he hasn't, he would like this group of people.

He passed away in 2007:

https://www.msthalloffame.org/neil_robert_lincoln.htm

    Lincoln died on Jan. 26, 2007, at age 69. He was a member of the
    American Institute of Astronautics and Aeronautics and holds nine U.S.
    patents for computer hardware. He was a Distinguished Lecturer of the
    Institute of Electronics and Electrical Engineers, a National Lecturer
    of the Association for Computer Machinery and on the Advisory Committee
    on Science and Technology Centers for the National Science Foundation.

- John
BTW, IBM’s “Computer Museum” has (had?) a (the?) Stretch and Harvest. The
“museum” is a warehouse full of old stuff.

On Thu, Sep 8, 2022 at 9:35 PM Douglas McIlroy <douglas.mcilroy@dartmouth.edu> wrote:
>> I heard that the IBM 709
>> series had 36 bit words because Arthur Samuel,
>> then at IBM, needed 32 bits to identify the playable squares on a
>> checkerboard, plus some bits for color and kinged
>
> To be precise, Samuel's checkers program was written for
> the 701, which originated the architecture that the 709 inherited.
>
> Note that IBM punched cards had 72 data columns plus 8
> columns typically dedicated to sequence numbers. 700-series
> machines supported binary IO encoded two words per row, 12
> rows per card--a perfect fit to established technology. (I do
> not know whether the fit was deliberate or accidental.)
>
> As to where the byte came from, it was christened for the IBM
> Stretch, aka 7020. The machine was bit-addressed and the width
> of a byte was variable. Multidimensional arrays of packed bytes
> could be streamed at blinding speeds. Eight bits, which synced
> well with the 7020's 64-bit words, was standardized in the 360
> series. The term "byte" was not used in connection with
> 700-series machines.
>
> Doug

--
===== nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>