* Re: [TUHS] core
[not found] <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
@ 2018-06-16 22:14 ` Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
0 siblings, 2 replies; 73+ messages in thread
From: Johnny Billquist @ 2018-06-16 22:14 UTC (permalink / raw)
To: tuhs
On 2018-06-16 21:00, Clem Cole <clemc@ccc.com> wrote:
> below...
>
> On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa
> <jnc@mercury.lcs.mit.edu> wrote:
>>
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>> have
>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>> but I don't recall exactly where I saw it.)
> I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.
I think the paper you both are referring to is the "What have we learned
from the PDP-11", by Gordon Bell and Bill Strecker in 1977.
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
There are some additional comments in
https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm
> My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
> important.
Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was
very clearly designed as the system bus for all needs by DEC, and was
used just like that until the 11/70, which introduced a separate memory
bus. In all previous PDP-11s, both memory and peripherals were connected
on the Unibus.
Why it only has 18 bits, I don't know. It might have been a reflection
of the fact that most things at DEC were either 12 or 18 bits at the
time, and 12 was obviously not going to cut it. But that is pure
speculation on my part.
But, if you read that paper again (the one from Bell), you'll see that
he was pretty much a source for the Unibus as well, and the whole idea
of having it for both memory and peripherals. But that does not tell us
anything about why it got 18 bits. It also, incidentally, has 18 data
bits, but that is mostly ignored by all systems. I believe the KS-10
made use of that, though. And maybe the PDP-15. And I suspect the same
would be true for the address bits. Neither system was probably
involved when the Unibus was created, but both made fortuitous use of
it when they were designed.
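For concreteness, the address-space arithmetic behind those bus widths is easy to check (an illustrative aside of mine, not part of the original mail):

```python
# Bytes addressable for the widths discussed in this thread:
# 16-bit (PDP-11 program addresses), 18-bit (Unibus), 22-bit (11/70 memory).
for bits in (16, 18, 22):
    size = 2 ** bits
    print(f"{bits}-bit addressing: {size} bytes ({size // 1024} KB)")
```

So the 18 address lines give a 256 KB ceiling, which is the limit the thread is discussing.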
> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years. Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful. Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like: Someone like me saying: *"Henk, 18 bits
> is not going to cut it."* He might have replied something like: *"Bool
> sheet *[a dutchman's way of cursing in English], *we already gave you two
> more bit than you can address* (actually he'd then probably stop mid
> sentence and translate in his head from Dutch to English - which was always
> interesting when you argued with him).
Quite possible. :-)
> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not alone. Only someone like Gordon at
> the time could have overruled it, and I don't think the problems were
> foreseen as Noel notes.
Bell in retrospect thinks that they should have realized this problem,
but it would appear they really did not consider it at the time. Or
maybe just didn't believe in what they predicted.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 22:14 ` [TUHS] core Johnny Billquist
@ 2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
1 sibling, 0 replies; 73+ messages in thread
From: Arthur Krewat @ 2018-06-16 22:38 UTC (permalink / raw)
To: tuhs
On 6/16/2018 6:14 PM, Johnny Billquist wrote:
> I believe the KS-10 made use of that, though.
Absolutely.
The 36-bit transfer mode requires one memory cycle
(rather than two) for each word transferred. In 36-bit
mode, the PDP-11 word count must be even (an odd word count
would hang the UBA). Only one device on each UBA can do
36-bit transfers; this is because the UBA has only one
buffer to hold the left 18 bits, while the right 18 bits
come across the UNIBUS. On the 2020 system, the disk is the
only device using the 36-bit transfer mode.
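The buffering described above amounts to concatenating two 18-bit Unibus halves into one 36-bit word; a minimal sketch (a hypothetical illustration, not actual DEC hardware logic):

```python
# Sketch of the UBA 36-bit transfer mode quoted above: the UBA buffers
# the left 18 bits while the right 18 bits come across the UNIBUS, and
# the two halves then form one 36-bit word in a single memory cycle.
# Hypothetical illustration only, not DEC code.
MASK18 = (1 << 18) - 1

def pack36(left18, right18):
    assert 0 <= left18 <= MASK18 and 0 <= right18 <= MASK18
    return (left18 << 18) | right18

print(oct(pack36(0o777777, 0o000001)))  # 0o777777000001
```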
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 22:14 ` [TUHS] core Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
@ 2018-06-17 0:15 ` Clem cole
2018-06-17 1:50 ` Johnny Billquist
1 sibling, 1 reply; 73+ messages in thread
From: Clem cole @ 2018-06-17 0:15 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
Hmm. I think you are trying to put too fine a point on it. Having had this conversation with a number of folks who were there, you’re right that the ability to put memory on the Unibus at the lower end was clearly there, but I’ll take the word of Dave Cane and Henk, who were the primary HW folks behind a lot of it, in calling it an I/O bus. Btw, its replacement, Dave’s BI, was clearly designed with I/O as the primary point. It was supposed to be open sourced in today’s parlance, but DEC closed it at the end. The whole reason was to have an I/O bus that 3rd parties could build I/O boards around. Plus, by then DEC had started doing split transactions on the memory bus a la the SMI, which was supposed to be private. After the BI mess, by the time of Alpha, BI morphed into PCI, which was made open and of course is now Intel’s PCIe. But that said, even today we need to make large memory windows available thru it for things like messaging and GPUs - so the differences can get really squishy. OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core. And I expect future GPUs to work the same way.
And at this point (on die) the differences between a memory and an I/O bus are often driven by power and die size. The memory bus folks are often willing to pay more to keep the memory hierarchy a bit more sane. I/O busses will often let the SW deal with consistency in return for massive scale.
Sent from my PDP-7 running UNIX V0 -- expect things to be almost, but not quite.
> On Jun 16, 2018, at 6:14 PM, Johnny Billquist <bqt@update.uu.se> wrote:
>
>> On 2018-06-16 21:00, Clem Cole <clemc@ccc.com> wrote:
>> below... > On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa
> <jnc@mercury.lcs.mit.edu> wrote:
>>>
>>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>>> have
>>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>>> but I don't recall exactly where I saw it.)
>> I think it was part of the same paper where he made the observation that
>> the greatest mistake an architecture can have is too few address bits.
>
> I think the paper you both are referring to is the "What have we learned from the PDP-11", by Gordon Bell and Bill Strecker in 1977.
>
> https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
>
> There are some additional comments in https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm
>
>> My understanding is that the problem was that UNIBUS was perceived as an
>> I/O bus and as I was pointing out, the folks creating it/running the team
>> did not value it, so in the name of 'cost', more bits was not considered
>> important.
>
> Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was very clearly designed as the system bus for all needs by DEC, and was used just like that until the 11/70, which introduced a separate memory bus. In all previous PDP-11s, both memory and peripherals were connected on the Unibus.
>
> Why it only has 18 bits, I don't know. It might have been a reflection of the fact that most things at DEC were either 12 or 18 bits at the time, and 12 was obviously not going to cut it. But that is pure speculation on my part.
>
> But, if you read that paper again (the one from Bell), you'll see that he was pretty much a source for the Unibus as well, and the whole idea of having it for both memory and peripherals. But that does not tell us anything about why it got 18 bits. It also, incidentally, has 18 data bits, but that is mostly ignored by all systems. I believe the KS-10 made use of that, though. And maybe the PDP-15. And I suspect the same would be true for the address bits. Neither system was probably involved when the Unibus was created, but both made fortuitous use of it when they were designed.
>
>> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
>> engineering at DEC for many years. Henk was notoriously frugal (we might
>> even say 'cheap'), so I can imagine that he did not want to spend on
>> anything that he thought was wasteful. Just like I retold the
>> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
>> I don't know for sure, but I can see that without someone really arguing
>> with Henk as to why 18 bits was not 'good enough.' I can imagine the
>> conversation going something like: Someone like me saying: *"Henk, 18 bits
>> is not going to cut it."* He might have replied something like: *"Bool
>> sheet *[a dutchman's way of cursing in English], *we already gave you two
>> more bit than you can address* (actually he'd then probably stop mid
>> sentence and translate in his head from Dutch to English - which was always
>> interesting when you argued with him).
>
> Quite possible. :-)
>
>> Note: I'm not blaming Henk, just stating that his thinking was very much
>> that way, and I suspect he was not alone. Only someone like Gordon at
>> the time could have overruled it, and I don't think the problems were
>> foreseen as Noel notes.
>
> Bell in retrospect thinks that they should have realized this problem, but it would appear they really did not consider it at the time. Or maybe just didn't believe in what they predicted.
>
> Johnny
>
> --
> Johnny Billquist || "I'm on a bus
> || on a psychedelic trip
> email: bqt@softjar.se || Reading murder books
> pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-17 0:15 ` Clem cole
@ 2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
0 siblings, 2 replies; 73+ messages in thread
From: Johnny Billquist @ 2018-06-17 1:50 UTC (permalink / raw)
To: Clem cole; +Cc: tuhs
On 2018-06-17 02:15, Clem cole wrote:
> Hmm. I think you are trying to put too fine a point on it. Having had this conversation with a number of folks who were there, you’re right that the ability to put memory on the Unibus at the lower end was clearly there, but I’ll take the word of Dave Cane and Henk, who were the primary HW folks behind a lot of it, in calling it an I/O bus. Btw, its replacement, Dave’s BI, was clearly designed with I/O as the primary point. It was supposed to be open sourced in today’s parlance, but DEC closed it at the end. The whole reason was to have an I/O bus that 3rd parties could build I/O boards around. Plus, by then DEC had started doing split transactions on the memory bus a la the SMI, which was supposed to be private. After the BI mess, by the time of Alpha, BI morphed into PCI, which was made open and of course is now Intel’s PCIe. But that said, even today we need to make large memory windows available thru it for things like messaging and GPUs - so the differences can get really squishy. OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core. And I expect future GPUs to work the same way.
Well, it is an easily observable fact that before the PDP-11/70, all
PDP-11 models had their memory on the Unibus. So it was not only "an
ability at the lower end", but the only option for a number of years.
The need to grow beyond 18-bit addresses, as well as speed demands,
drove the PDP-11/70 to abandon the Unibus for memory. The same was then
true of the PDP-11/44, PDP-11/24, PDP-11/84 and PDP-11/94, which also
had 22-bit addressing of memory and followed the PDP-11/70 in having
memory separated from the Unibus. All other Unibus machines had their
memory on the Unibus.
After the Unibus (if we skip the Q-bus) you had the SBI, which was also
used both for controllers (well, mostly bus adapters) and memory, on
the VAX-11/780. However, evaluation of where the bottleneck was on that
machine led to the memory being moved away from the SBI for the VAX-86x0
machines. The SBI was never used much for any direct controllers;
instead you had Unibus adapters, so here the Unibus was used as a pure
I/O bus.
And BI came after that. (I'm ignoring the CMI of the VAX-11/750 and
others here...)
By the time of the VAX, the Unibus was clearly only an I/O bus (for the
VAX and high-end PDP-11s). No VAX ever had memory on the Unibus. But the
Unibus was not designed with the VAX in mind.
But unless my memory fails me, VAXBI had CPUs, memory and bus adapters
as the most common kinds of devices (or "nodes" as I think they are
called) on BI. Which really helped when you started doing multiprocessor
systems, since you then had a shared bus for all memory and CPU
interconnect. You could also have Unibus adapters on VAXBI, but that was
never common.
And after VAXBI you got XMI, which is also what early large Alphas used,
also with CPUs, memory and bus adapters. In XMI machines, you usually
had VAXBI adapters for peripherals, and of course on an XMI machine you
would not hook up CPUs or memory on the VAXBI, so in an XMI machine
VAXBI became de facto an I/O bus.
Not sure there is any relationship at all between VAXBI and PCI, by the way.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-17 1:50 ` Johnny Billquist
@ 2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
1 sibling, 0 replies; 73+ messages in thread
From: Ronald Natalie @ 2018-06-17 10:33 UTC (permalink / raw)
To: TUHS main list
>
> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
@ 2018-06-20 16:55 ` Paul Winalski
2018-06-20 21:35 ` Johnny Billquist
1 sibling, 1 reply; 73+ messages in thread
From: Paul Winalski @ 2018-06-20 16:55 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
On 6/16/18, Johnny Billquist <bqt@update.uu.se> wrote:
>
> After Unibus (if we skip Q-bus) you had SBI, which was also used both
> for controllers (well, mostly bus adaptors) and memory, for the
> VAX-11/780. However, evaluation of where the bottleneck was on that
> machine led to the memory being moved away from SBI for the VAX-86x0
> machines. SBI never was used much for any direct controllers, but
> instead you had Unibus adapters, so here the Unibus was used as a pure
> I/O bus.
You forgot the Massbus--commonly used for disk I/O.
SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
controller, Unibus controllers, Massbus controllers, and CI
controller.
CI (computer interconnect) was the communications medium for
VAXclusters (including the HSC50 intelligent disk controller).
DEC later introduced another interconnect system called BI, intended
to replace Unibus and Massbus.
As we used to say in our software group about all the various busses
and interconnects:
E Unibus Plurum
-Paul W.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-20 16:55 ` Paul Winalski
@ 2018-06-20 21:35 ` Johnny Billquist
2018-06-20 22:24 ` Paul Winalski
0 siblings, 1 reply; 73+ messages in thread
From: Johnny Billquist @ 2018-06-20 21:35 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
On 2018-06-20 18:55, Paul Winalski wrote:
> On 6/16/18, Johnny Billquist <bqt@update.uu.se> wrote:
>>
>> After Unibus (if we skip Q-bus) you had SBI, which was also used both
>> for controllers (well, mostly bus adaptors) and memory, for the
>> VAX-11/780. However, evaluation of where the bottleneck was on that
>> machine led to the memory being moved away from SBI for the VAX-86x0
>> machines. SBI never was used much for any direct controllers, but
>> instead you had Unibus adapters, so here the Unibus was used as a pure
>> I/O bus.
>
> You forgot the Massbus--commonly used for disk I/O.
I didn't try to list the pure I/O buses. But yes, the Massbus certainly
also existed.
> SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
> controller, Unibus controllers, Massbus controllers, and CI
> controller.
Right.
> CI (computer interconnect) was the communications medium for
> VAXclusters (including the HSC50 intelligent disk controller).
But CI I wouldn't even call a bus.
> DEC later introduced another interconnect system called BI, intended
> to replace Unibus and Massbus.
I think that is where my comment started.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-21 22:44 Nelson H. F. Beebe
2018-06-21 23:07 ` Grant Taylor via TUHS
0 siblings, 1 reply; 73+ messages in thread
From: Nelson H. F. Beebe @ 2018-06-21 22:44 UTC (permalink / raw)
To: tuhs
The discussion of the last few days about the Colossus is getting
somewhat off-topic for Unix heritage and history, but perhaps this
list of titles (truncated to 80 characters) and locations may be of
interest to some TUHS list readers.
------------------------------------------------------------------------
Papers and book chapters about the Colossus at Bletchley Park
Journal- and subject-specific bibliography files are found at
http://www.math.utah.edu/pub/tex/bib/NAME.bib
and author-specific ones under
http://www.math.utah.edu/pub/bibnet/authors/N/
where N is the first letter of the bibliography filename.
The first line in each group contains the bibliography filename, and
the citation label of the cited publication. DOIs are supplied
whenever they are available. Paragraphs are sorted by filename.
adabooks.bib Randell:1976:C
The COLOSSUS
annhistcomput.bib Good:1979:EWC
Early Work on Computers at Bletchley
https://doi.org/10.1109/MAHC.1979.10011
annhistcomput.bib deLeeuw:2007:HIS
Tunny and Colossus: breaking the Lorenz Schl{\"u}sselzusatz traffic
https://www.sciencedirect.com/science/article/pii/B978044451608450016X
babbage-charles.bib Randell:1976:C
The COLOSSUS
cacm2010.bib Anderson:2013:MNF
Max Newman: forgotten man of early British computing
https://doi.org/10.1145/2447976.2447986
computer1990.bib Anonymous:1996:UWI
Update: WW II Colossus computer is restored to operation
cryptography.bib Good:1979:EWC
Early Work on Computers at Bletchley
cryptography1990.bib Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758
cryptography2000.bib Rojas:2000:FCH
The first computers: history and architectures
cryptography2010.bib Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His
cryptologia.bib Michie:2002:CBW
Colossus and the Breaking of the Wartime ``Fish'' Codes
debroglie-louis.bib Homberger:1970:CMN
The Cambridge mind: ninety years of the booktitle Cambridge Review, 1879--1969
dijkstra-edsger-w.bib Randell:1980:C
The COLOSSUS
dyson-freeman-j.bib Brockman:2015:WTA
What to think about machines that think: today's leading thinkers on the age of
fparith.bib Randell:1977:CGC
Colossus: Godfather of the Computer
ieeeannhistcomput.bib Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758
lncs1997b.bib Carter:1997:BLC
The Breaking of the Lorenz Cipher: An Introduction to the Theory behind the Oper
lncs2000.bib Sale:2000:CGL
Colossus and the German Lorenz Cipher --- Code Breaking in WW II
master.bib Randell:1977:CGC
Colossus: Godfather of the Computer
mathcw.bib Randell:1982:ODC
The Origins of Digital Computers: Selected Papers
http://dx.doi.org/10.1007/978-3-642-61812-3
rutherford-ernest.bib Homberger:1970:CMN
The Cambridge mind: ninety years of the booktitle Cambridge Review, 1879--1969
rutherfordj.bib Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His
sigcse1990.bib Plimmer:1998:MIW
Machines invented for WW II code breaking
https://doi.org/10.1145/306286.306309
turing-alan-mathison.bib Randell:1976:C
The COLOSSUS
von-neumann-john.bib Randell:1982:ODC
The Origins of Digital Computers: Selected Papers
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe@math.utah.edu -
- 155 S 1400 E RM 233 beebe@acm.org beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-21 17:40 Noel Chiappa
0 siblings, 0 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-21 17:40 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Tim Bradshaw
> There is a paper, by Flowers, in the Annals of the History of Computing
> which discusses a lot of this. I am not sure if it's available online
Yes:
http://www.ivorcatt.com/47c.htm
(It's also available behind the IEEE pay-wall.) That issue of the Annals (5/3)
has a couple of articles about Colossus; the Coombs one on the design is
available here:
http://www.ivorcatt.com/47d.htm
but doesn't have much on the reliability issues; the Chandler one on
maintenance has more, but alas is only available behind the paywall:
https://www.computer.org/csdl/mags/an/1983/03/man1983030260-abs.html
Brian Randell did a couple of Colossus articles (in "History of Computing in
the 20th Century" and "Origin of Digital Computers") for which he interviewed
Flowers and others, there may be details there too.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-21 17:07 Nelson H. F. Beebe
0 siblings, 0 replies; 73+ messages in thread
From: Nelson H. F. Beebe @ 2018-06-21 17:07 UTC (permalink / raw)
To: tuhs
Tim Bradshaw <tfb@tfeb.org> commented on a paper by Tommy Flowers on
the design of the Colossus: here is the reference:
Thomas H. Flowers, The Design of Colossus, Annals of the
History of Computing 5(3) 239--253 July/August 1983
https://doi.org/10.1109/MAHC.1983.10079
Notice that it appeared in the Annals..., not the successor journal
IEEE Annals....
There is a one-column obituary of Tommy Flowers at
http://doi.ieeecomputersociety.org/10.1109/MC.1998.10137
Last night, I finished reading this recent book:
Thomas Haigh and Mark (Peter Mark) Priestley and Crispin Rope
ENIAC in action: making and remaking the modern computer
MIT Press 2016
ISBN 0-262-03398-4
https://doi.org/10.7551/mitpress/9780262033985.001.0001
It has extensive commentary about the ENIAC at the Moore School of
Engineering at the University of Pennsylvania in Philadelphia, PA. Its
construction began in 1943, with a major instruction set redesign in
1948, and it was shut down permanently on 2 October 1955 at 23:45.
The book notes that poor reliability of vacuum tubes and thousands of
soldered connections was a huge problem, and in the early years, only
about 1 hour out of 24 was devoted to useful runs; the rest of the
time was used for debugging, problem setup (which required wiring
plugboards), testing, and troubleshooting. Even so, runs generally
had to be repeated to verify that the same answers could be obtained:
often, they differed.
The book also reports that reliability was helped by never turning off
power: tubes were more susceptible to failure when power was restored.
The book reports that reliability of the ENIAC improved significantly
when on 28 April 1948, Nick Metropolis (co-inventor of the famous
Monte Carlo method) had the clock rate reduced from 100 kHz to 60 kHz.
It was only several years later that, with vacuum tube manufacturing
improvements, the clock rate was eventually moved back to 100 kHz.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe@math.utah.edu -
- 155 S 1400 E RM 233 beebe@acm.org beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-21 13:46 Doug McIlroy
2018-06-21 16:13 ` Tim Bradshaw
0 siblings, 1 reply; 73+ messages in thread
From: Doug McIlroy @ 2018-06-21 13:46 UTC (permalink / raw)
To: tuhs
Tim Bradshaw wrote:
"Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect. So by the time of Whirlwind it was presumably well-understood that this was possible."
"Colossus: The Secrets of Bletchley Park's Codebreaking Computers"
by Copeland et al. has little to say about reliability. But one
ex-operator remarks, "Often the machine broke down."
Whether it was the (significant) mechanical part or the electronics
that typically broke is unclear. Failures in a machine that's always
doing the same thing are easier to detect quickly than failures in
a machine that has a varied load. Also the task at hand could fail
for many other reasons (e.g. mistranscribed messages) so there was
no presumption of correctness of results--that was determined by
reading the decrypted messages. So I think it's a stretch to
argue that reliability was known to be a manageable issue.
Doug
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-21 13:46 Doug McIlroy
@ 2018-06-21 16:13 ` Tim Bradshaw
0 siblings, 0 replies; 73+ messages in thread
From: Tim Bradshaw @ 2018-06-21 16:13 UTC (permalink / raw)
To: Doug McIlroy; +Cc: tuhs
On 21 Jun 2018, at 14:46, Doug McIlroy <doug@cs.dartmouth.edu> wrote:
>
> Whether it was the (significant) mechanical part or the electronics
> that typically broke is unclear. Failures in a machine that's always
> doing the same thing are easier to detect quickly than failures in
> a machine that has a varied load. Also the task at hand could fail
> for many other reasons (e.g. mistranscribed messages) so there was
> no presumption of correctness of results--that was determined by
> reading the decrypted messages. So I think it's a stretch to
> argue that reliability was known to be a manageable issue.
I think it's reasonably well-documented that Flowers understood things about making valves (tubes) reliable that were not previously known: Colossus was well over ten times larger than any previous valve-based system at the time and there was huge scepticism about making it work, to the extent that he funded the initial one substantially himself as BP wouldn't. For instance he understood that you should never turn the things off: even quite significant maintenance was done on Colossi with them on (I believe they were kept on even when the room flooded on occasion, which led to fairly exciting electrical conditions). He also did things with heaters (the heater voltage was ramped up & down to avoid thermal stresses when powering things on or off) and other tricks such as soldering the valves into their bases to avoid connector problems.
The mechanical part (the tape reader) was in fact one significant reason for Colossus: the previous thing, Heath Robinson, had used two tapes which needed to be kept in sync (or, rather, needed to be allowed to drift out of sync in a controlled way with respect to each other), and this did not work well at all as the tapes would stretch & break. Colossus generated one of the tapes (the one corresponding to the Lorenz machine's settings) electronically and would then sync itself to the tape with the message text on it.
It's also not the case that the Colossi did only one thing: they were not general-purpose machines but they were more general-purpose than they needed to be and people found all sorts of ways of getting them to compute properties they had not originally been intended to compute. I remember talking to Donald Michie about this (he was one of the members of the Newmanry which was the bit of BP where the Colossi were).
There is a paper, by Flowers, in the Annals of the History of Computing which discusses a lot of this. I am not sure if it's available online (or where my copy is).
Further, of course, you can just ask people who run one about reliability: there's a reconstructed one at TNMOC which is well worth seeing (there's a Heath Robinson as well), and the people there are very willing to talk about reliability -- I learnt about the heater-voltage-ramping thing by asking what the huge rheostat was for, and later seeing it turned on in the morning -- one of the problems with the reconstruction is that they are required to turn it off at night (this is the same problem that afflicts people who look after steam locomotives, which in real life would almost never have been allowed to get cold).
As I said, I don't want to detract from Whirlwind in any way, but it is the case that Tommy Flowers did sort out significant aspects of the reliability of relatively large valve systems.
--tim
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-20 20:11 Doug McIlroy
2018-06-20 23:53 ` Tim Bradshaw
0 siblings, 1 reply; 73+ messages in thread
From: Doug McIlroy @ 2018-06-20 20:11 UTC (permalink / raw)
To: tuhs
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.
> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.
It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:
> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.
If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
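Doug's extrapolation can be made concrete with a toy reliability model. The per-tube figure below is assumed purely for illustration, not historical data; the point is only how the machine-level failure rate scales with tube count:

```python
# Back-of-the-envelope sketch of the extrapolation above. If tube failures
# are independent with a constant rate, the machine's failure rate grows
# linearly with tube count, so its MTBF shrinks in proportion.

def mtbf_hours(n_tubes, tube_mtbf_hours):
    """Mean time between failures for a machine of n independent tubes."""
    return tube_mtbf_hours / n_tubes

TUBE_MTBF = 5000.0  # assumed hours between failures per tube (illustrative)

radio = mtbf_hours(5, TUBE_MTBF)          # a 5-tube radio
whirlwind = mtbf_hours(5000, TUBE_MTBF)   # a 5,000-tube computer
sage_scale = mtbf_hours(50000, TUBE_MTBF) # a 50,000-tube machine

print(f"5-tube radio:        one failure every {radio:.0f} hours")
print(f"5,000-tube machine:  one failure every {whirlwind:.1f} hours")
print(f"50,000-tube machine: one failure every {sage_scale:.2f} hours")
```

Under these assumed numbers a radio fails every 1,000 hours while the 50,000-tube machine fails every six minutes -- which is why marginal testing (weeding out weak tubes before they failed in service) mattered so much.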
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-20 20:11 Doug McIlroy
@ 2018-06-20 23:53 ` Tim Bradshaw
0 siblings, 0 replies; 73+ messages in thread
From: Tim Bradshaw @ 2018-06-20 23:53 UTC (permalink / raw)
To: Doug McIlroy; +Cc: tuhs
Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on, which were used to great effect. So by the time of Whirlwind it was presumably well-understood that this was possible.
(This is not meant to detract from what Whirlwind did in any way.)
> On 20 Jun 2018, at 21:11, Doug McIlroy <doug@cs.dartmouth.edu> wrote:
>
> If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
> radio to a 5000-tube computer (say nothing of the 50,000-tube machines
> for which Whirlwind served as a prototype), computing looks like a
> crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
> computed reliably--a sine qua non for the nascent industry.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
2018-06-21 14:09 ` Paul Winalski
0 siblings, 2 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-19 12:23 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Doug McIlroy <doug@cs.dartmouth.edu>
> Core memory wiped out competing technologies (Williams tube, mercury
> delay line, etc) almost instantly and ruled for over twenty years.
I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.
In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!
There's simply no question that without core, computers would not have
advanced (in use, societal importance, technical depth, etc) at the speed
they did. It was one of the most consequential steps in the
development of computers to what they are today: up there with transistors,
ICs, DRAM and microprocessors.
> Yet late in his life Forrester told me that the Whirlwind-connected
> invention he was most proud of was marginal testing
Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.
In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.
Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-19 12:23 Noel Chiappa
@ 2018-06-19 12:58 ` Clem Cole
2018-06-19 13:53 ` John P. Linderman
2018-06-21 14:09 ` Paul Winalski
1 sibling, 1 reply; 73+ messages in thread
From: Clem Cole @ 2018-06-19 12:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 2092 bytes --]
On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> > From: Doug McIlroy <doug@cs.dartmouth.edu>
>
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
>
> Given the above, I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum level as core.
Wow -- I had exactly the same reaction. To me, core was the second
most important invention (semiconductor switching being the first) for
making computing practical. I was thinking that systems must have been
really bad (worse than I knew) from a reliability standpoint if he put
marginal testing up there as more important than core.
Like you, I thought core memory was pretty darned important. I never used
a system that had Williams tubes, although we had one in storage so I knew
what it looked like and knew how much more 'dense' core was compared to
it. Which is still pretty amazing compared to today. For the modern user,
the IBM 360's 1M core box (of which we had 4) was made up of 4 19" relay
racks, each about 54" high and 24" deep. If you go to
CMU Computer Photos from Chris Hausler
<http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
and scroll down you can see some pictures of the old 360 (including me in
them circa 75/76, in front of it) to gauge the size.
FWIW:
I broke in with MECL, which Motorola invented / developed for IBM for System
360, and it (and TTL) were the first logic families I learned to design
with. I remember the margin pots on the front of the 360 that we used
when we were trying to find weak gates, which happened about once every 10
days.
The interesting part to me is that I suspect the PDP-10's and the Univac
1108 broke as often as the 360 did, but I have fewer memories of chasing
problems with them. Probably because it was less of an issue, with fewer
people disrupted by the 'down' time.
[-- Attachment #2: Type: text/html, Size: 4724 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-19 12:58 ` Clem Cole
@ 2018-06-19 13:53 ` John P. Linderman
0 siblings, 0 replies; 73+ messages in thread
From: John P. Linderman @ 2018-06-19 13:53 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
[-- Attachment #1: Type: text/plain, Size: 2664 bytes --]
If I read the wikipedia entry for Whirlwind correctly (not a safe
assumption), it was tube based, and I think there was a tradeoff of speed,
as determined by power, and tube longevity. Given the purpose, early
warning of air attack, speed was vital, but so, too, was keeping it alive.
So a means of finding a "sweet spot" was really a matter of national
security. I can understand Forrester's pride in that context.
On Tue, Jun 19, 2018 at 8:58 AM, Clem Cole <clemc@ccc.com> wrote:
>
>
> On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
> wrote:
>
>> > From: Doug McIlroy <doug@cs.dartmouth.edu>
>>
>> > Yet late in his life Forrester told me that the Whirlwind-connected
>> > invention he was most proud of was marginal testing
>>
>> Given the above, I'm totally gobsmacked to hear that. Margin testing was
>> important, yes, but not even remotely on the same quantum level as core.
>
> Wow -- I had exactly the same reaction. To me, core was the second
> most important invention (semiconductors switching being he first) for
> making computing practical. I was thinking that systems must have been
> really bad (worse than I knew) from a reliability stand point if he put
> marginal testing up there as more important than core.
>
> Like you, I thought core memory was pretty darned important. I never used
> a system that had Williams tubes, although we had one in storage so I knew
> what it looked like and knew how much more 'dense' core was compared to
> it. Which is pretty amazing still compare today. For the modern user,
> the IBM 360 a 1M core box (which we had 4) was made up of 4 19" relay
> racks, each was about 54" high and 24" deep. If you go to
> CMU Computer Photos from Chris Hausler
> <http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
> and scroll down you can see some pictures of the old 360 (including a
> copy of me in them circa 75/76 in front of it) to gage the size).
>
>
>
> FWIW:
> I broke in with MECL which Motorola invented / developed for IBM for
> System 360 and it (and TTL) were the first logic families I learned with
> which to design. I remember the margin pots on the front of the 360 that
> we used when we were trying to find weak gates, which happened about ones
> every 10 days.
>
> The interesting part to me is that I'm suspect the PDP-10's and the Univac
> 1108 broke as often as the 360 did, but I have fewer memories of chasing
> problems with them. Probably because it was a less of an issue that was
> causing so many people to be disrupted by the 'down' time.
>
[-- Attachment #2: Type: text/html, Size: 5642 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
@ 2018-06-21 14:09 ` Paul Winalski
1 sibling, 0 replies; 73+ messages in thread
From: Paul Winalski @ 2018-06-21 14:09 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On 6/19/18, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> In other words, there's a reason they were all dropped like hot potatoes in
> favour of core - which, looked at from our DRAM-era perspective, seems
> quaintly dinosaurian. Individual pieces of hardware you can actually _see_
> with the naked eye, for _each_ bit? But that should give some idea of how
> much
> worse everything before it was, that it killed them all off so quickly!
When we replaced our S/360 model 25 with a S/370 model 125, I remember
being shocked when I found out that the semiconductor memory on the
125 was volatile. You mean we're going to have to reload the OS every
time we power the machine off? Yuck!!
-Paul W.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-19 11:50 Doug McIlroy
0 siblings, 0 replies; 73+ messages in thread
From: Doug McIlroy @ 2018-06-19 11:50 UTC (permalink / raw)
To: tuhs
> As best I now recall, the concept was that instead of the namespace having a
> root at the top, from which you had to allocate downward (and then recurse),
> it built _upward_ - if two previously un-connected chunks of graph wanted to
> unite in a single system, they allocated a new naming layer on top, in which
> each existing system appeared as a constituent.
The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDS had to agree across the union (as in NFS) or were mapped (as
in RFS).
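The pasting scheme can be sketched in a few lines (all names here are invented for illustration; this is a toy model, not the Newcastle Connection's actual implementation):

```python
# Toy sketch of uniting two name spaces "bottom-up": create a new
# superdirectory and make '..' at each old root point up into it.

class Dir:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # what '..' resolves to
        self.children = {}

    def lookup(self, comp):
        return self.parent if comp == ".." else self.children[comp]

def unite(name_a, root_a, name_b, root_b):
    """Paste two rooted trees together under a new top naming layer."""
    top = Dir("/")
    top.children = {name_a: root_a, name_b: root_b}
    root_a.parent = top             # '..' at each old root now points up
    root_b.parent = top
    return top

unix1, unix2 = Dir("unix1"), Dir("unix2")
top = unite("unix1", unix1, "unix2", unix2)
# From one system's root, '../unix2' now reaches the other system:
assert unix1.lookup("..").lookup("unix2") is unix2
```

Neither existing tree is renamed or reallocated; each simply appears as a constituent of the new layer, which is the attraction of building upward.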
Doug
^ permalink raw reply [flat|nested] 73+ messages in thread
[parent not found: <20180618121738.98BD118C087@mercury.lcs.mit.edu>]
* Re: [TUHS] core
[not found] <20180618121738.98BD118C087@mercury.lcs.mit.edu>
@ 2018-06-18 21:13 ` Johnny Billquist
0 siblings, 0 replies; 73+ messages in thread
From: Johnny Billquist @ 2018-06-18 21:13 UTC (permalink / raw)
To: Noel Chiappa, ron, TUHS main list
On 2018-06-18 14:17, Noel Chiappa wrote:
> > The "separate" bus for the semiconductor memory is just a second Unibus
>
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.
Ah. You and Ron are right. I am confused.
So there were some previous PDP-11 models who did not have their memory
on the Unibus. The 11/45,50,55 accessed memory from the CPU not through
the Unibus, but through the fastbus, which was a pure memory bus, as far
as I understand. You (obviously) could also have memory on the Unibus,
but that would be slower then.
Ah, and there is a jumper to tell which addresses are served by the
fastbus, and the rest then go to the Unibus. Thanks, I had missed these
details before. (To be honest, I have never actually worked on any of
those machines.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-18 17:56 Noel Chiappa
2018-06-18 18:51 ` Clem Cole
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-18 17:56 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Clem Cole
> My experience is that more often than not, it's less a failure to see
> what a successful future might bring, and often one of well '*we don't
> need to do that now/costs too much/we don't have the time*.'
Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.
By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.
In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.
> I can imagine that he did not want to spend on anything that he thought
> was wasteful.
Understandable. But see above...
The art is in finding a path that leave the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.
A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway.. lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates an I/O device register is being looked
at. Without that, Q18 devices would have either i) had to incur the cost now
of more bus address line transceivers, or ii) stopped working when the bus was
upgraded to 22 address lines.
They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).
> Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
> thinking Brooks was nuts
Don't think I've heard that one?
>> the decision to remove the variable-length addresses from IPv3 and
>> substitute the 32-bit addresses of IPv4.
> I always wondered about the back story on that one.
My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.
(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)
> 32-bits seemed infinite in those days and no body expected the network
> to scale to the size it is today and will grow to in the future
Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
>> So, is poor vision common? All too common.
> But to be fair, you can also end up with being like DEC and often late
> to the market.
Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...
This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.
So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.
> I think in both cases would have been allowed Alpha to be better
> accepted if DEC had shipped earlier with a few hacks, but them improved
> Tru64 as a better version was developed (*i.e.* replace the memory
> system, the I/O system, the TTY handler, the FS just to name a few that
> got rewritten from OSF/1 because folks thought they were 'weak').
But you can lose with that strategy too.
Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - a common event when
you have the experience/knowledge of the first pass.
Unfortunately, by that time it had the reputation as 'horribly slow and
inefficient', and in a lot of ways, never kicked that:
http://www.multicians.org/myths.html
Sigh, sometimes you can't win!
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-18 17:56 Noel Chiappa
@ 2018-06-18 18:51 ` Clem Cole
0 siblings, 0 replies; 73+ messages in thread
From: Clem Cole @ 2018-06-18 18:51 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 5390 bytes --]
On Mon, Jun 18, 2018 at 1:56 PM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
>
> > Just like I retold the Amdahl/Brooks story of the 8-bit byte and
> Amdahl
> > thinking Brooks was nuts
>
> Don't think I've heard that one?
Apologies for the repeat, if I have sent this to TUHS before (I know I have
mentioned it in other places). Somebody else on this list mentioned David
Brailsford's YouTube video, which has more details, but he had some of it
not quite right. I had watched it and we were communicating; he thanked
me, as I have since introduced him to my old friend & colleague
Russ Robelen, who was the chief designer of the Model 50 and later led
the ASC system with John Cocke working for him. Brailsford is right about
the results of byte addressing and I do think his lecture is excellent and
you can learn a great deal. That said, Russ tells the details of the story
like this:
Gene Amdahl wanted the byte to be 6 bits and felt that 8 bits was
'wasteful' of his hardware. Amdahl also did not see why more than 24 bits
for a word was really needed, and most computations used words of course.
4 6-bit bytes in a word seemed satisfactory to Gene. Fred Brooks kept
kicking Amdahl out of his office and told him flatly that until he came
back with things that were powers of 2, don't bother - we couldn't program
it. The 32-bit word was a compromise, but note the original address was
only 24 bits (3 8-bit bytes), although Brooks made sure all address
functions stored all 32 bits - which, as Gordon Bell pointed out later, was
the #1 thing that saved System 360 and made it last.
BTW:
Russ and Ed Sussenguth invented speculative execution for the ACS system a
couple of years later. I was harassing him last January, because for 40
years we have been using his cool idea and it came back to bite us at
Intel. Here is the message cut/pasted from his email for context:
"I see you are still leading a team at Intel
developing super computers and associated technologies. It certainly
is exciting times in high speed computing.
It brings back memories of my last work at IBM 50 years ago on the ACS
project. You know you are old when you publish in the IEEE Annals of
the History of Computing. One of the co-authors, Ed Sussenguth, passed
away before our paper was published in 2016.
https://www.computer.org/csdl/mags/an/2016/01/man2016010060.html
Some of the work we did way back then has made the news in an unusual way
with the recent revelations on Spectre and Meltdown. I read the
‘Spectre Attacks: Exploiting Speculative Execution’ paper yesterday
trying to understand how speculative execution was being exploited.
At ACS we were the first group at IBM to come up with the notion of the
Branch Table and other techniques for speeding up execution.
I wish you were closer. I'd love to hear your views on the state of
computing today. I have a framed micrograph of the IBM Power 8 chip on
the wall in my office. In many ways the Power Series is an outgrowth
of ACS.
I still try to keep up with what is happening in my old field. The
recent advances by Google in Deep Learning are breathtaking to me.
Advances like AlphaGo Zero I never expected to see in my lifetime."
>
> But you can lose with that strategy too.
>
> Multics had a lot of sub-systems re-written from the ground up over time,
> and
> the new ones were always better (faster, more efficient) - a common even
> when
> you have the experience/knowledge of the first pass.
>
> Unfortunately, by that time it had the reputation as 'horribly slow and
> inefficient', and in a lot of ways, never kicked that:
>
> http://www.multicians.org/myths.html
>
> Sigh, sometimes you can't win!
Yep - although I think that may have been a case of economics. Multics,
for all its great ideas, was just a huge system, when Moore's law started to
make smaller systems possible. So I think your comment about thinking
about what you need now and what you will need in the future was part of
the issue.
I look at Multics vs Unix the same way I look at TNC vs Beowulf clusters.
At the time we did TNC, we worked really hard to nail the transparency
thing, and we did. It was (is) awesome. But it cost. Tru64 (VMS and
other SSI) style clusters are not around today. The hack that is Beowulf
is what lived on. The key is that it was good enough, and for most people
that extra work we did to get rid of those seams just was not worth it.
And because Beowulf was economically successful, things were
implemented for it that were never even considered for Tru64 and the SSI
style systems. To me, Multics and Unix have the same history.
Multics was (is) cool; but Unix is similar, yet different, and took the
lead. The key is that it was not a straight path. Once Unix took over,
history went in a different direction.
Clem
[-- Attachment #2: Type: text/html, Size: 17304 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-18 14:51 Noel Chiappa
2018-06-18 14:58 ` Warner Losh
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-18 14:51 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Tony Finch <dot@dotat.at>
> Was this written down anywhere?
Alas, no. It was a presentation at a group seminar, and used either hand-drawn
transparencies, or a white-board - don't recall exactly which. I later tried to
dig it up for use in Nimrod, but without success.
As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
Or something like that! :-)
The issue with 'top-down' is that you have to have some global 'authority' to
manage the top level - hand out chunks, etc, etc. (For a spectacular example
of where this can go, look at NSAPs.) And what do you do when you run out of
top-level space? (Although in the NSAP case, they had such a complex top
couple of layers, they probably would have avoided that issue. Instead, they
had the problem that their name-space was spectacularly ill-suited to path
selection [routing], since in very large networks, interface names
[addresses] must have a topological aspect if the path selection is to
scale. Although looking at the Internet nowadays, perhaps not!)
'Bottom-up' is not without problems of course (e.g. what if you want to add
another layer, e.g. to support potentially-nested virtual machines).
I'm not sure how well Dave understood the issue of path selection scaling at
the time he proposed it - it was very early on, '78 or so - since we didn't
understand path selection then as well as we do now. IIRC, I think he was
mostly interested in it as a way to avoid having to have an assignment
authority. The attraction for me was that it was easier to ensure that the
names had the needed topological aspect.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-18 14:51 Noel Chiappa
@ 2018-06-18 14:58 ` Warner Losh
0 siblings, 0 replies; 73+ messages in thread
From: Warner Losh @ 2018-06-18 14:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 973 bytes --]
On Mon, Jun 18, 2018 at 8:51 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> I'm not sure how well Dave understood the issue of path selection scaling
> at
> the time he proposed it - it was very early on, '78 or so - since we didn't
> understand path selection then as well as we do now. IIRC, I think he was
> mostly interested in it as a way to avoid having to have an assignment
> authority. The attraction for me was that it was easier to ensure that the
> names had the needed topological aspect.
I always thought the domain names were backwards, but that's because I
wanted command completion to work on them. I have not reevaluated this
view, though, since the very early days of the internet and I suppose
completion would be less useful than I'd originally supposed because the
cost to get the list is high or administratively blocked (tell me all the
domains that start with, in my notation, "com.home" when I hit <tab> in a
fast, easy way).
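The "backwards" notation can be illustrated in a few lines (the reversed form is hypothetical notation, not a real DNS name format, and the host names are invented). Writing labels most-significant-first makes related names share a literal string prefix, which is exactly what shell-style completion works from:

```python
# Reversed (most-significant-label-first) domain names share prefixes,
# so <tab>-style prefix completion becomes a simple string match.

def reverse_domain(name):
    return ".".join(reversed(name.split(".")))

hosts = ["home.example.com", "mail.example.com", "www.example.org"]
rev = sorted(reverse_domain(h) for h in hosts)

# Everything under example.com now completes from one prefix:
matches = [h for h in rev if h.startswith("com.example.")]
print(matches)  # → ['com.example.home', 'com.example.mail']
```

Of course, as noted above, completion is only useful if enumerating the candidate set is cheap and permitted, which it generally is not for the DNS.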
Warner
[-- Attachment #2: Type: text/html, Size: 1372 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
[parent not found: <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>]
* Re: [TUHS] core
[not found] <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
@ 2018-06-18 5:58 ` Johnny Billquist
2018-06-18 12:39 ` Ronald Natalie
0 siblings, 1 reply; 73+ messages in thread
From: Johnny Billquist @ 2018-06-18 5:58 UTC (permalink / raw)
To: tuhs
On 2018-06-18 04:00, Ronald Natalie <ron@ronnatalie.com> wrote:
>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
Eh... Yes... But...
The "separate" bus for the semiconductor memory is just a second Unibus,
so the statement is still true. All (earlier) PDP-11 models had their
memory on the Unibus. Including the 11/45,50,55.
It's just that those models have two Unibuses.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-18 5:58 ` Johnny Billquist
@ 2018-06-18 12:39 ` Ronald Natalie
0 siblings, 0 replies; 73+ messages in thread
From: Ronald Natalie @ 2018-06-18 12:39 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
> On Jun 18, 2018, at 1:58 AM, Johnny Billquist <bqt@update.uu.se> wrote:
>
> On 2018-06-18 04:00, Ronald Natalie <ron@ronnatalie.com> wrote:
>>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
>> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
>
> Eh... Yes... But...
> The "separate" bus for the semiconductor memory is just a second Unibus, so the statement is still true. All (earlier) PDP-11 models had their memory on the Unibus. Including the 11/45,50,55.
> It's just that those models have two Unibuses.
No, you are confusing different things I think. The fastbus (where the memory was) is a distinct bus. The fastbus was dual ported with the CPU in one port and a second UNIBUS could be connected to the other port.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-17 21:18 Noel Chiappa
0 siblings, 0 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-17 21:18 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Derek Fawcus <dfawcus+lists-tuhs@employees.org>
> my scan of it suggests that only the host part of the address which were
> extensible
Well, the division into 'net' and 'rest' does not appear to have been a hard
one at that point, as it later became.
> The other thing obviously missing in the IEN 28 version is the TTL
Yes... interesting!
> it has the DF flag in the TOS field, and an OP bit in the flags field
Yeah, small stuff like that got added/moved/removed around a lot.
> the CIDR vs A/B/C stuff didn't really change the rest.
It made packet processing in routers quite different; routing lookups, the
routing table, etc became much more complex (I remember that change)! Also in
hosts, which had not yet had their understanding of fields in the addresses
lobotomized away (RFC-1122, Section 3.3.1).
Yes, the impact on code _elsewhere_ in the stack was minimal, because the
overall packet format didn't change, and addresses were still 32 bits, but...
> The other bit I find amusing are the various movements of the port
> numbers
Yeah, there was a lot of discussion about whether they were properly part of
the internetwork layer, or the transport. I'm not sure there's really a 'right'
answer; PUP:
http://gunkies.org/wiki/PARC_Universal_Packet
made them part of the internetwork header, and seemed to do OK.
I think we eventually decided that we didn't want to mandate a particular port
name size across all transports, and moved it out. This had the down-side that
there are some times when you _do_ want to have the port available to an
IP-only device, which is why ICMP messages return the first N bytes of the
data _after_ the IP header (since it's not clear where the port field[s] will
be).
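That design can be sketched concretely. For both TCP and UDP the first four bytes after the IP header happen to be the source and destination ports, so a device that quotes the IP header plus the first 8 data bytes (as ICMP does) lets a receiver recover the ports without transport-specific knowledge. The packet bytes below are fabricated for illustration:

```python
# Sketch: recovering ports from an ICMP error's quoted data, which is the
# original IP header plus the first 8 bytes of its payload. For TCP and
# UDP alike, the first four payload bytes are the two 16-bit port fields.
import struct

def ports_from_icmp_quote(quoted):
    """quoted = original IP header + first 8 bytes of its payload."""
    ihl = (quoted[0] & 0x0F) * 4               # IP header length in bytes
    src, dst = struct.unpack("!HH", quoted[ihl:ihl + 4])
    return src, dst

# A minimal fabricated IPv4 header (version 4, IHL 5 => 20 bytes), followed
# by the start of a UDP header with ports 53 and 1024:
ip_hdr = bytes([0x45]) + bytes(19)
udp_start = struct.pack("!HH", 53, 1024) + bytes(4)
assert ports_from_icmp_quote(ip_hdr + udp_start) == (53, 1024)
```

The same lookup would fail for a transport that put its ports elsewhere, which is the limitation Noel describes: "where the port field[s] will be" is not actually knowable in general once ports live in the transport layer.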
But I found, working with PUP, there were some times when the defined ports
didn't always make sense with some protocols (although PUP didn't really have
a 'protocol' field per se); the interaction of 'PUP type' and 'socket' could
sometimes be confusing/problematic. So I personally felt that was at least as
good a reason to move them out. 'Ports' make no sense for routing protocols,
etc.
Overall, I think in the end, TCP/IP got that all right - the semantics of the
'protocol' field are clear and simple, and ports in the transport layer have
worked well; I can't think of any places (other than routers which want to
play games with connections) where not having ports in the internetwork layer
has been an issue.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-17 17:58 Noel Chiappa
2018-06-18 9:16 ` Tim Bradshaw
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-17 17:58 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: "Theodore Y. Ts'o"
> To be fair, it's really easy to be wise after the fact.
Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".
> failed protocols and designs that collapsed of their own weight because
> architects added too much "maybe it will be useful in the future"
And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less have to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.
> Sometimes having architects succeed in adding their "vision" to a
> product can be the worst thing that ever happened
A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').
It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.
> The problem is it's hard to figure out in advance which is poor vision
> versus brilliant engineering to cut down the design so that it is "as
> simple as possible", but nevertheless, "as complex as necessary".
Absolutely. But it can be done. Let's look (as an example) at that IPv3->IPv4
addressing decision.
One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) It was going to be a failure (in
which case it probably didn't matter what was done), or ii) it was going to be
a success, in which case that 32-bit field was clearly going to be a crippling
problem.
With that in hand, there was no excuse for that decision.
I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-17 17:58 Noel Chiappa
@ 2018-06-18 9:16 ` Tim Bradshaw
2018-06-18 15:33 ` Tim Bradshaw
0 siblings, 1 reply; 73+ messages in thread
From: Tim Bradshaw @ 2018-06-18 9:16 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
[-- Attachment #1: Type: text/plain, Size: 773 bytes --]
On 17 Jun 2018, at 18:58, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> One of two things was going to be true of the 'Internet' (that name didn't
> exist then, but it's a convenient tag): i) It was going to be a failure (in
> which case it probably didn't matter what was done), or ii) it was going to be
> a success, in which case that 32-bit field was clearly going to be a crippling
> problem.
There's a third possibility: if implementation of the specification is too complicated it will fail, while if the implementation is simple enough it may succeed, in which case the specification which allowed the simple implementation will eventually become a problem.
I don't know whether IP falls into that category, but I think a lot of things do.
--tim
[-- Attachment #2: Type: text/html, Size: 4128 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-18 9:16 ` Tim Bradshaw
@ 2018-06-18 15:33 ` Tim Bradshaw
0 siblings, 0 replies; 73+ messages in thread
From: Tim Bradshaw @ 2018-06-18 15:33 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
> On 18 Jun 2018, at 15:33, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> When you say "if implementation of the specification is too complicated it
> will fail", I think you mean 'the specification is so complex that _any_
> implementation must _necessarily_ be so complex that all the implementations,
> and thus the specification itself, will fail'?
Yes, except that I don't think the specification itself has to be complex; it just needs to require that implementations be hard, computationally expensive, or both. For instance, a specification for integer arithmetic in a programming language which required that (a/b)*b be a except where b is zero isn't complicated, I think, but it requires implementations which are either hard, computationally expensive, or both.
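To illustrate with a quick sketch: the requirement is a one-liner, but a conforming implementation cannot use ordinary truncating machine division -- it needs an exact representation such as rationals:

```python
from fractions import Fraction

a, b = 7, 2

# Truncating integer division -- what the hardware gives you -- breaks
# the simple-looking requirement that (a/b)*b == a:
assert (a // b) * b == 6            # 3 * 2 == 6, not 7

# An exact (rational) representation meets it, at a cost in speed and
# storage -- the spec is simple, the implementation burden is not:
assert Fraction(a, b) * b == a      # (7/2) * 2 == 7
```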
>
> And for "in which case the specification which allowed the simple
> implementation will eventually become a problem", I think that's part of your
> saying 'you get to pick your poison; either the spec is so complicated it
> fails right away, or if it's simple enough to succeed in the short term, it
> will neccessarily fail in the long term'?
Yes, with the above caveat about the spec not needing to be complicated, and I'd say something like 'become limiting in the long term' rather than 'fail in the long term'.
So I think your idea was that, if there's a choice between a do-the-right-thing, future-proof (variable-length-address field) solution or a will-work-for-now, implementationally-simpler (fixed-length) solution, then you should do the right thing, because if the thing fails it makes no odds and if it succeeds then you will regret the second solution. My caveat is that success (in the sense of wide adoption) is not independent of which solution you pick, and in particular that will-work-for-now solutions are much more likely to succeed than do-the-right-thing ones.
As I said, I don't know whether this applies to IP.
--tim
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-17 14:36 Noel Chiappa
2018-06-17 15:58 ` Derek Fawcus
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-17 14:36 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Derek Fawcus
> Are you able to point to any document which still describes that
> variable length scheme? I see that IEN 28 defines a variable length
> scheme (using version 2)
That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).
> and that IEN 41 defines a different variable length scheme, but is
> proposing to use version 4.
Right, that's a draft only (no code ever written for it), from just before the
meeting that substituted 32-bit addresses.
> (IEN 44 looks a lot like the current IPv4).
Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
stuff). :-)
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-17 14:36 Noel Chiappa
@ 2018-06-17 15:58 ` Derek Fawcus
0 siblings, 0 replies; 73+ messages in thread
From: Derek Fawcus @ 2018-06-17 15:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sun, Jun 17, 2018 at 10:36:10AM -0400, Noel Chiappa wrote:
> > From: Derek Fawcus
>
> > Are you able to point to any document which still describes that
> > variable length scheme? I see that IEN 28 defines a variable length
> > scheme (using version 2)
>
> That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
> here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).
Ah - thanks.
So my scan of it suggests that it was only the host part of the address which was
extensible, but then I guess the CIDR scheme could have eventually been applied
to that portion.
The other thing obviously missing in the IEN 28 version is the TTL (which
appeared by IEN 41).
> > and that IEN 41 defines a different variable length scheme, but is
> > proposing to use version 4.
>
> Right, that's a draft only (no code ever written for it), from just before the
> meeting that substituted 32-bit addresses.
>
> > (IEN 44 looks a lot like the current IPv4).
>
> Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
> stuff). :-)
I wrote 'a lot', because it has the DF flag in the TOS field, and an OP bit
in the flags field; the CIDR vs A/B/C stuff didn't really change the rest.
But yeah - essentially what we still use now. Now 40 years on and still going.
The other bit I find amusing is the various movements of the port numbers:
they were originally part of the combined header (e.g. IEN 26),
then the IEN 21 TCP has them in the middle of its header, and by IEN 44 they are
sort of their own header, split from the TCP header, which now omits ports.
Eventually they end up in the current location, as part of the start of the
TCP header (and UDP, etc), essentially combining that 'Port Header' with
whatever transport follows.
DF
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 22:57 Noel Chiappa
0 siblings, 0 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-16 22:57 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Johnny Billquist
> incidentally have 18 data bits, but that is mostly ignored by all
> systems. I believe the KS-10 made use of that, though. And maybe the
> PDP-15.
The 18-bit data thing is a total kludge; they recycled the two bus parity
lines as data lines.
The first device that I know of that used it is the RK11-E:
http://gunkies.org/wiki/RK11_disk_controller#RK11-E
which is the same cards as the RK11-D, with a jumper set for 18-bit operation,
and a different clock crystal. The other UNIBUS interface that could do this
was the RH11 MASSBUS controller. Both were originally done for the PDP-15;
they were used with the UC15 Unichannel.
The KS10:
http://gunkies.org/wiki/KS10
wound up using the 18-bit RH11 hack, but that was many years later.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
2018-06-16 14:10 ` Steve Nickolas
0 siblings, 2 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-16 13:49 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Clem Cole
> The 8 pretty much had a base price in the $30k range in the mid to late
> 60s.
His statement was made in 1977 (ironically, the same year as the Apple
II).
(Not really that relevant, since he was apparently talking about 'smart
homes'; still, the history of DEC and personal computers is not a happy one;
perhaps why that quotation was taken up.)
> Later models used TTL and got down to a single 3U 'drawer'.
There was eventually a single-chip micro version, done in the mid-70's; it
was used in a number of DEC word-processing products.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 13:49 Noel Chiappa
@ 2018-06-16 14:10 ` Clem Cole
2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 14:10 ` Steve Nickolas
1 sibling, 1 reply; 73+ messages in thread
From: Clem Cole @ 2018-06-16 14:10 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1311 bytes --]
Thanks, I thought it was about 10 years earlier. It means that the 16-bit
systems were definitely the norm, the 32-bit systems were well under
design, and the micros already birthed. That said, as I pointed out in my
paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with
max memory) ran between $50-150K depending how it was configured and an
11/70 was closer to $250K. To scale, in 2017 dollars, we calculated that
comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher
in those days cost about $5-$10K per year.
On Sat, Jun 16, 2018 at 9:49 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> > From: Clem Cole
>
> > The 8 pretty much had a base price in the $30k range in the mid to
> late
> > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy
> one;
> perhaps why that quotation was taken up.)
>
> > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
> Noel
>
[-- Attachment #2: Type: text/html, Size: 2217 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 14:10 ` Clem Cole
@ 2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 15:28 ` Larry McVoy
0 siblings, 1 reply; 73+ messages in thread
From: Lawrence Stewart @ 2018-06-16 14:34 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
[-- Attachment #1: Type: text/plain, Size: 901 bytes --]
5-10K for a grad student seems low for the late ‘70s. I was an RA at Stanford then and they paid about $950/month. The loaded cost would have been about twice that.
My lab had an 11/34 with V7 and we considered ourselves quite well off.
-L
> On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc@ccc.com> wrote:
>
> Thanks, I thought it was about 10 years earlier. It means that the 16 bit systems were definitely the norm and the 32 bit system were well under design and the micros already birthed. That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K. To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
>
[-- Attachment #2: Type: text/html, Size: 1971 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 14:34 ` Lawrence Stewart
@ 2018-06-16 15:28 ` Larry McVoy
0 siblings, 0 replies; 73+ messages in thread
From: Larry McVoy @ 2018-06-16 15:28 UTC (permalink / raw)
To: Lawrence Stewart; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
I was a grad student at UWisc in 1986 and got paid $16K/year.
On Sat, Jun 16, 2018 at 10:34:00AM -0400, Lawrence Stewart wrote:
> 5-10K for a grad student seems low for the late '70s. I was an RA at Stanford then and they paid about $950/month. The loaded cost would have been about twice that.
>
> My lab had an 11/34 with V7 and we considered ourselves quite well off.
>
> -L
>
> > On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc@ccc.com> wrote:
> >
> > Thanks, I thought it was about 10 years earlier. It means that the 16 bit systems were definitely the norm and the 32 bit system were well under design and the micros already birthed. That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K. To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
> >
>
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
@ 2018-06-16 14:10 ` Steve Nickolas
2018-06-16 14:13 ` William Pechter
1 sibling, 1 reply; 73+ messages in thread
From: Steve Nickolas @ 2018-06-16 14:10 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, 16 Jun 2018, Noel Chiappa wrote:
> > From: Clem Cole
>
> > The 8 pretty much had a base price in the $30k range in the mid to late
> > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy one;
> perhaps why that quotation was taken up.)
>
> > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
> Noel
>
Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like
Paperboy?
-uso.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 14:10 ` Steve Nickolas
@ 2018-06-16 14:13 ` William Pechter
0 siblings, 0 replies; 73+ messages in thread
From: William Pechter @ 2018-06-16 14:13 UTC (permalink / raw)
To: tuhs, Steve Nickolas, Noel Chiappa; +Cc: tuhs
[-- Attachment #1: Type: text/plain, Size: 1039 bytes --]
Yup. IIRC they used the T-11 -- the same chip as in the VAX 86xx front end.
Bill
On June 16, 2018 10:10:43 AM EDT, Steve Nickolas <usotsuki@buric.co> wrote:
>On Sat, 16 Jun 2018, Noel Chiappa wrote:
>
>> > From: Clem Cole
>>
>> > The 8 pretty much had a base price in the $30k range in the mid
>to late
>> > 60s.
>>
>> His statement was made in 1977 (ironically, the same year as the
>Apple
>> II).
>>
>> (Not really that relevant, since he was apparently talking about
>'smart
>> homes'; still, the history of DEC and personal computers is not a
>happy one;
>> perhaps why that quotation was taken up.)
>>
>> > Later models used TTL and got down to a single 3U 'drawer'.
>>
>> There was eventually a single-chip micro version, done in the
>mid-70's; it
>> was used in a number of DEC word-processing products.
>>
>> Noel
>>
>
>Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like
>Paperboy?
>
>-uso.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
[-- Attachment #2: Type: text/html, Size: 1850 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
` (3 more replies)
0 siblings, 4 replies; 73+ messages in thread
From: Noel Chiappa @ 2018-06-16 13:37 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Dave Horsfall
> idiots keep repeating those "quotes" ... in some sort of an effort to
> make the so-called "experts" look silly; a form of reverse
> jealousy/snobbery or something? It really pisses me off
You've just managed to hit one of my hot buttons.
I can't speak to the motivations of everyone who repeats these stories, but my
professional career has been littered with examples of poor vision from
technical colleagues (some of whom should have known better), against which I
(in my role as an architect, which is necessarily somewhere where long-range
thinking is - or should be - a requirement) have struggled again and again -
sometimes successfully, more often, not.
So I chose those two only because they are well-known examples - but, as you
correctly point out, they are poor examples, for a variety of reasons. But
they perfectly illustrate something I am _all_ too familiar with, and which
happens _a lot_. And the original situation I was describing (the MIT core
patent) really happened - see "Memories that Shaped an Industry", page 211.
Examples of poor vision are legion - and more importantly, often/usually seen
to be such _at the time_ by some people - who were not listened to.
Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
this vague memory of a quote from Gordon Bell admitting that was a mistake,
but I don't recall exactly where I saw it.) That very quickly became a major
limitation. I'm not sure why they did it (the number of contact on a standard
DEC connector probably had something do with it, connected to the fact that
the first PDP-11 had only 16-bit addressing anyway), but it should have been
obvious that it was not enough.
And a major one from the very start of my career: the decision to remove the
variable-length addresses from IPv3 and substitute the 32-bit addresses of
IPv4. Alas, I was too junior to be in the room that day (I was just down the
hall), but I've wished forever since that I was there to propose an alternate
path (the details of which I will skip) - that would have saved us all
countless billions (no, I am not exaggerating) spent on IPv6. Dave Reed was
pretty disgusted with that at the time, and history shows he was right.
Dave also tried to get a better checksum algorithm into TCP/UDP (I helped him
write the PDP-11 code to prove that it was plausible) but again, it got turned
down. (As did his wonderful idea for bottom-up address allocation, instead of
top-down. Oh well.) People have since discussed issues with the TCP/IP
checksum, but it's too late now to change it.
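For reference, the checksum that did ship (the one's-complement sum later written up in RFC 1071) is tiny -- and a short sketch shows one of its much-discussed weaknesses directly, namely that it is blind to the order of 16-bit words:

```python
def internet_checksum(data):
    """One's-complement 16-bit checksum as used by IP/TCP/UDP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold in end-around carry
    return ~total & 0xFFFF

# Swapping whole 16-bit words is invisible to it:
assert internet_checksum(b"\x12\x34\x56\x78") == internet_checksum(b"\x56\x78\x12\x34")
```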
One place where I _did_ manage to win was in adding subnetting support to
hosts (in the Host Requirements WG); it was done the way I wanted, with the
result that when CIDR came along, even though it hadn't been foreseen at the
time we did subnetting, it required _no_ host changes of any kind. But
mostly I lost. :-(
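The host-side rule that made the subnetting win possible is almost embarrassingly small -- a sketch of the model (not any particular stack's code): the host treats the mask as an opaque 32-bit value, never deriving it from the address class, which is exactly why an arbitrary CIDR prefix later just worked:

```python
def same_subnet(local, dest, mask):
    """Host forwarding decision under subnetting: deliver directly if the
    destination matches under the configured mask, else use a gateway.
    The mask is opaque -- nothing here inspects address class bits."""
    return (local & mask) == (dest & mask)

# A classful-era /16 mask and a later CIDR /22 both work unchanged:
assert same_subnet(0xC0A80001, 0xC0A8FF01, 0xFFFF0000)        # 192.168.0.1 / 192.168.255.1
assert not same_subnet(0xC0A80001, 0xC0A80401, 0xFFFFFC00)    # /22 splits them
```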
So, is poor vision common? All too common.
Noel
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
@ 2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
` (2 subsequent siblings)
3 siblings, 0 replies; 73+ messages in thread
From: Clem Cole @ 2018-06-16 18:59 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 8352 bytes --]
below...
On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
>
>
> I can't speak to the motivations of everyone who repeats these stories,
> but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against
> which I
> (in my role as an architect, which is necessarily somewhere where
> long-range
> thinking is - or should be - a requirement) have struggled again and again
> -
> sometimes successfully, more often, not.
>
Amen, although sadly many of us if not all of us have a few of these
stories. In fact, I'm fighting another one of these battles right now.🤔
My experience is that more often than not, it's less a failure to see what
a successful future might bring, and often one of well '*we don't need to
do that now/costs too much/we don't have the time*.'
That said, DEC was the epitome of the old line about perfection being the
enemy of success. I like to say to my colleagues, pick the things
that are going to really matter. Make those perfect and bet the company on
them. But think in terms of what matters. As you point out, address size
issues are killers and you need to get those right at time t0.
Without saying too much, many firms like my own, think in terms of
computation (math libraries, cpu kernels), but frankly if I can not get the
data to/from CPU's functional units, or the data is stored in the wrong
place, or I have much of the main memory tied up in the OS managing
different types of user memory; it doesn't matter [HPC customers in
particular pay for getting a job done -- they really don't care how -- just
get it done and done fast].
To me, it becomes a matter of 'value' -- our HW folks know a crappy
computational system will doom the device, so that is what they put their
effort into building. My argument has often been that the messaging
systems, memory hierarchy and house keeping are what you have to get right
at this point. No amount of SW will fix HW that is lacking the right
support in those places (not that lots of computes are bad, but they are
actually not the big issue in the HPC when yet get down to it these days).
>
> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
> have
> this vague memory of a quote from Gordon Bell admitting that was a mistake,
> but I don't recall exactly where I saw it.)
I think it was part of the same paper where he made the observation that
the greatest mistake an architecture can have is too few address bits.
My understanding is that the problem was that UNIBUS was perceived as an
I/O bus and as I was pointing out, the folks creating it/running the team
did not value it, so in the name of 'cost', more bits was not considered
important.
I used to know and work with the late Henk Schalke, who ran Unibus (HW)
engineering at DEC for many years. Henk was notoriously frugal (we might
even say 'cheap'), so I can imagine that he did not want to spend on
anything that he thought was wasteful. Just like I retold the
Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
I don't know for sure, but I can see it happening without someone really arguing
with Henk as to why 18 bits was not 'good enough.' I can imagine the
conversation going something like: Someone like me saying: *"Henk, 18 bits
is not going to cut it."* He might have replied something like: *"Bool
sheet *[a Dutchman's way of cursing in English], *we already gave you two
more bits than you can address* (actually he'd then probably stop mid
sentence and translate in his head from Dutch to English - which was always
interesting when you argued with him).
Note: I'm not blaming Henk, just stating that his thinking was very much
that way, and I suspect he was not alone. Only someone like Gordon at
the time could have overruled it, and I don't think the problems were
foreseen as Noel notes.
>
> And a major one from the very start of my career: the decision to remove
> the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.
>
I always wondered about the back story on that one. I do seem to remember
that there had been a proposal for variable-length addresses at one point;
but never knew why it was not picked. As you say, I was certainly way too
junior to have been part of that discussion. We just had some of the
document from you guys and we were told to try to implement it. My guess
is this is an example of folks thinking variable-length addressing was wasteful.
32 bits seemed infinite in those days and nobody expected the network to
scale to the size it is today and will grow to in the future [I do remember
before Noel and team came up with ARP, somebody quipped that Xerox
Ethernet's 48-bits were too big and IP's 32-bit was too small. The
original hack I did was since we used 3Com board and they all shared the
upper 3 bytes of the MAC address to map the lower 24 to the IP address - we
were not connecting to the global network so it worked. Later we used a
look up table, until the ARP trick was created].
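That 3Com-era mapping can be sketched as follows (the vendor prefix below is illustrative -- I'm not claiming these were the exact bytes used):

```python
OUI = bytes([0x02, 0x60, 0x8C])     # shared vendor prefix (illustrative)

def ip_to_mac(ip):
    """Pre-ARP hack: form the MAC by gluing the shared vendor prefix onto
    the low 24 bits of the IPv4 address.  Only workable on an isolated
    LAN where every interface really does share that prefix."""
    return OUI + (ip & 0xFFFFFF).to_bytes(3, "big")

# 10.1.2.3 -> 02:60:8c:01:02:03
assert ip_to_mac(0x0A010203) == bytes([0x02, 0x60, 0x8C, 0x01, 0x02, 0x03])
```

ARP replaced this with a runtime lookup precisely because no such fixed relationship between the two address spaces could survive a heterogeneous, globally connected network.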
>
> One place where I _did_ manage to win was in adding subnetting support to
> hosts (in the Host Requirements WG); it was done the way I wanted, with the
> result that when CIDR came along, even though it hadn't been foreseen at the
> time we did subnetting, it required _no_ host changes of any kind.
Amen and thank you.
> But
>
> mostly I lost. :-(
>
I know the feeling. Too many battles that in hindsight you think - darn if
they had only listened. FWIW: if you try to mess with Intel OPA2 fabric
these days there is a back story. A few years ago I had a quite a battle
with the HW folks, but I won that one. The SW cannot tell the difference
between on-die or off-die, so the OS does not have the manage it. Huge
difference in OS performance and space efficiency. But I suspect that
there are some HW folks that spit on the floor when I come in the room.
We'll see if I am proven to be right in the long run; but at 1M cores I
don't want to think of the OS mess to manage two different types of memory
for the message system.
>
> So, is poor vision common? All too common.
Indeed. But to be fair, you can also end up being like DEC and often
late to the market.
My example is of Alpha (and 64-bit vs 32-bit). No attempt to support
32-bit was really done because 64-bit was the future. Senior folks considered
32-bit mode wasteful. The argument was that adding it was not only
technically not a good idea, but it would suck up engineering resources to
implement it in both HW and SW. Plus, we were coming from the VAX, so folks had
to recompile all the code anyway (god forbid that the SW might not be
64-bit clean mind you). [VMS did a few hacks, but Tru64 stayed 'clean.']
Similarly (back to the UNIX theme for this mailing list), Tru64 was a rewrite
of OSF/1 - but hardly any OSF code was left in the DEC kernel by the time
it shipped. Each subsystem was rewritten/replaced to 'make it perfect'
[always with a good argument mind you, but never looking at the long term
issues]. Those two choices cost 3 years in market acceptance. By the
time Alphas hit the street, it did not matter. I think in both cases Alpha
would have been better accepted if DEC had shipped earlier
with a few hacks, and then improved Tru64 as a better version was developed
(*i.e.* replace the memory system, the I/O system, the TTY handler, the FS
just to name a few that got rewritten from OSF/1 because folks thought they
were 'weak').
The trick, in my mind, is to identify the real technical features you
cannot fix later and get those right at the beginning. Then place the bet on
those features, and develop as fast as you can and do the best with them as
you are able given your constraints. Then slowly over time improve the
things that mattered less at the beginning as you have a revenue stream.
If you wait for perfection, you get something like Alpha which was a great
architecture (particularly compared to INTEL*64) - but in the end, did not
matter.
Clem
[-- Attachment #2: Type: text/html, Size: 13046 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
@ 2018-06-17 12:15 ` Derek Fawcus
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-18 12:36 ` Tony Finch
3 siblings, 0 replies; 73+ messages in thread
From: Derek Fawcus @ 2018-06-17 12:15 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
>
> And a major one from the very start of my career: the decision to remove the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.
Are you able to point to any document which still describes that variable length scheme?
I see that IEN 28 defines a variable length scheme (using version 2),
and that IEN 41 defines a different variable length scheme,
but is proposing to use version 4.
(IEN 44 looks a lot like the current IPv4).
DF
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
@ 2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
2018-06-18 12:36 ` Tony Finch
3 siblings, 2 replies; 73+ messages in thread
From: Theodore Y. Ts'o @ 2018-06-17 17:33 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> I can't speak to the motivations of everyone who repeats these stories, but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against which I
> (in my role as an architect, which is necessarily somewhere where long-range
> thinking is - or should be - a requirement) have struggled again and again -
> sometimes successfully, more often, not....
>
> Examples of poor vision are legion - and more importantly, often/usually seen
> to be such _at the time_ by some people - who were not listened to.
To be fair, it's really easy to be wise after the fact. Let's
start with Unix; Unix is very bare-bones, when other OS architects
wanted to add lots of features that were spurned for simplicity's
sake. Or we could compare X.500 versus LDAP, and X.400 and SMTP.
It's easy to mock decisions that weren't forward-thinking enough; but
it's also really easy to mock failed protocols and designs that
collapsed of their own weight because architects added too much "maybe
it will be useful in the future".
The architects that designed the original (never shipped) Microsoft
Vista thought they had great "vision". Unfortunately they added way
too much complexity and overhead to an operating system just in time
to watch Moore's law fail to deliver the performance needed to support
all of that overhead.  Adding a database into the kernel and making it a
fundamental part of the file system? OK, stupid? How about adding
all sorts of complexity in VMS and network protocols to support
record-oriented files?
Sometimes having architects succeed in adding their "vision" to
a product can be the worst thing that ever happened to an operating
system or, say, the entire OSI networking protocol suite.
> So, is poor vision common? All too common.
Definitely. The problem is it's hard to figure out in advance which
is poor vision versus brilliant engineering to cut down the design so
that it is "as simple as possible", but nevertheless, "as complex as
necessary".
- Ted
* Re: [TUHS] core
2018-06-17 17:33 ` Theodore Y. Ts'o
@ 2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
1 sibling, 0 replies; 73+ messages in thread
From: Jon Forrest @ 2018-06-17 19:50 UTC (permalink / raw)
To: tuhs
On 6/17/2018 10:33 AM, Theodore Y. Ts'o wrote:
> The architects that designed the original (never shipped) Microsoft
> Vista thought they had great "vision".
There's a very fine boundary between "vision" and "hallucination".
Jon
* Re: [TUHS] core
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
@ 2018-06-18 14:56 ` Clem Cole
1 sibling, 0 replies; 73+ messages in thread
From: Clem Cole @ 2018-06-18 14:56 UTC (permalink / raw)
To: Theodore Y. Ts'o; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
On Sun, Jun 17, 2018 at 1:33 PM, Theodore Y. Ts'o <tytso@mit.edu> wrote:
> On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> > I can't speak to the motivations of everyone who repeats these stories,
> but my
> > professional career has been littered with examples of poor vision from
> > technical colleagues (some of whom should have known better), against
> which I
> > (in my role as an architect, which is necessarily somewhere where
> long-range
> > thinking is - or should be - a requirement) have struggled again and
> again -
> > sometimes successfully, more often, not....
> >
> > Examples of poor vision are legion - and more importantly, often/usually
> seen
> > to be such _at the time_ by some people - who were not listened to.
>
> To be fair, it's really easy to be wise after the fact.  Let's
> start with Unix; Unix is very bare-bones, while other OS architects
> wanted to add lots of features that were spurned for simplicity's
> sake.
Amen, brother.  I refer to this as figuring out and understanding what
matters and what is really just window dressing.  That is much easier to
do after the fact, and for those of us who lived UNIX, we spent a lot of
time defending it.  Many of the 'attacks' were from systems like VMS and
RSX that were thought to be more 'complete' or 'professional.'
> Or we could compare X.500 versus LDAP, and X.400 and SMTP.
>
Hmmm.  I'll accept X.500, but SMTP I always said was hardly 'simple' -
although compared to what it replaced (FTPing files and remote execution)
it was.
>
> It's easy to mock decisions that weren't forward-thinking enough; but
> it's also really easy to mock failed protocols and designs that
> collapsed of their own weight because architects added too much "maybe
> it will be useful in the future".
>
+1
>
> ...
> Adding a database into the kernel and making it a
> fundamental part of the file system? OK, stupid? How about adding
> all sorts of complexity in VMS and network protocols to support
> record-oriented files?
>
tjt once put it well: 'It's not so bad that RMS has 250-1000 options, but
someone has to check for each of them on every I/O.'
>
> Sometimes having architects succeed in adding their "vision" to
> a product can be the worst thing that ever happened to an operating
> system or, say, the entire OSI networking protocol suite.
>
I'll always describe it as having 'good taste.'  And part of 'good taste'
is learning what really works and what really does not.  BTW: having good
taste in one thing does not necessarily give you license in another area.
And I think that is a common issue.  "Hey, we were successful here, we
must be geniuses..."  Much of DEC's SW was good, but not all of it, as an
example.  Or to pick on my own BSD experience, sendmail is a great example
of something that solved a problem we had, but boy do I wish Eric had not
screwed the SMTP daemon into it ....
>
> > So, is poor vision common? All too common.
>
> Definitely. The problem is it's hard to figure out in advance which
> is poor vision versus brilliant engineering to cut down the design so
> that it is "as simple as possible", but nevertheless, "as complex as
> necessary".
Exactly... or as was said before: as simple as possible, but not
simpler.  But I like to add that understanding 'possible' is different
from 'it works.'
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
` (2 preceding siblings ...)
2018-06-17 17:33 ` Theodore Y. Ts'o
@ 2018-06-18 12:36 ` Tony Finch
3 siblings, 0 replies; 73+ messages in thread
From: Tony Finch @ 2018-06-18 12:36 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> (As did [Dave Reed's] wonderful idea for bottom-up address allocation,
> instead of top-down. Oh well.)
Was this written down anywhere?
Tony.
--
f.anthony.n.finch <dot@dotat.at> http://dotat.at/
the widest possible distribution of wealth
* Re: [TUHS] core
@ 2018-06-16 12:58 Doug McIlroy
2018-06-20 11:10 ` Dave Horsfall
0 siblings, 1 reply; 73+ messages in thread
From: Doug McIlroy @ 2018-06-16 12:58 UTC (permalink / raw)
To: tuhs
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.
Doug
* Re: [TUHS] core
2018-06-16 12:58 Doug McIlroy
@ 2018-06-20 11:10 ` Dave Horsfall
0 siblings, 0 replies; 73+ messages in thread
From: Dave Horsfall @ 2018-06-20 11:10 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Sat, 16 Jun 2018, Doug McIlroy wrote:
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>
> Core memory wiped out competing technologies (Williams tube [...]
Ah, the Williams tube. Was that not the one where women clerks (they
weren't called programmers) wearing nylon stockings were not allowed
anywhere near them, because of the static zaps thus generated? Or was
that yet another urban myth that appears to be polluting my history file?
How said women were determined to be wearing nylons and thus banned,
well, I have no idea... Hell, even a popular computer weekly in the 80s used to
joke that "software" was a female computer programmer; I'm sure that Rear
Admiral Grace Hopper (USN retd, decd) would've had a few words to say
about that sexism...
-- Dave
* Re: [TUHS] core
@ 2018-06-16 12:51 Noel Chiappa
2018-06-16 13:11 ` Clem cole
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-16 12:51 UTC (permalink / raw)
To: dave; +Cc: tuhs, jnc
> From: Dave Horsfall <dave@horsfall.org>
>> one of the Watsons saying there was probably a market for
>> <single-digit> computers; Ken Olsen saying people wouldn't want
>> computers in their homes; etc, etc.
> I seem to recall reading somewhere that these were urban myths... Does
> anyone have actual references in their contexts?
Well, for the Watson one, there is some controversy:
https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution
My guess is that he might actually have said it, and it was passed down orally
for a while before it was first written down. The thing is that he is alleged
to have said it in 1943, and there probably _was_ a market for only 5 of the
kind of computing devices available at that point (e.g. the Mark I).
> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc
No. The 7094 is circa 1960, almost 20 years later.
> Olsens's quote was about the DEC-System 10...
Again, no. He did say it, but it was not about PDP-10s:
https://en.wikiquote.org/wiki/Ken_Olsen
"Olsen later explained that he was referring to smart homes rather than
personal computers." Which sounds plausible (in the sense of 'what he meant',
not 'it was correct'), given where he said it (a World Future Society
meeting).
Noel
* Re: [TUHS] core
2018-06-16 12:51 Noel Chiappa
@ 2018-06-16 13:11 ` Clem cole
0 siblings, 0 replies; 73+ messages in thread
From: Clem cole @ 2018-06-16 13:11 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
And I believe at the time, KO was commenting in terms of Bell’s ‘minimal computer’ definition (the ‘mini’ - aka 12-bit systems) of the day - DEC’s PDP-8, not the 10. IIRC, the 8 pretty much had a base price in the $30k range in the mid to late 60s. FWIW, the original 8 was discrete bipolar (matched) transistors built with DEC flip chips, and physically about 2 or 3 19” relay racks in size. Later models used TTL and got down to a single 3U ‘drawer.’
Clem
Also please remember that originally, ‘mini’ did not mean small. That was a computer-press redo when the single-chip ‘micro’ computers came into being in the mid-1970s.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
On Jun 16, 2018, at 8:51 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>> From: Dave Horsfall <dave@horsfall.org>
>
>>> one of the Watsons saying there was probably a market for
>>> <single-digit> computers; Ken Olsen saying people wouldn't want
>>> computers in their homes; etc, etc.
>
>> I seem to recall reading somewhere that these were urban myths... Does
>> anyone have actual references in their contexts?
>
> Well, for the Watson one, there is some controversy:
>
> https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution
>
> My guess is that he might actually have said it, and it was passed down orally
> for a while before it was first written down. The thing is that he is alleged
> to have said it in 1943, and there probably _was_ a market for only 5 of the
> kind of computing devices available at that point (e.g. the Mark I).
>
>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc
>
> No. The 7094 is circa 1960, almost 20 years later.
>
>
>> Olsen's quote was about the DEC-System 10...
>
> Again, no. He did say it, but it was not about PDP-10s:
>
> https://en.wikiquote.org/wiki/Ken_Olsen
>
> "Olsen later explained that he was referring to smart homes rather than
> personal computers." Which sounds plausible (in the sense of 'what he meant',
> not 'it was correct'), given where he said it (a World Future Society
> meeting).
>
> Noel
* Re: [TUHS] core
@ 2018-06-15 15:25 Noel Chiappa
2018-06-15 23:05 ` Dave Horsfall
0 siblings, 1 reply; 73+ messages in thread
From: Noel Chiappa @ 2018-06-15 15:25 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: "John P. Linderman"
> 4 bucks a bit!
When IBM went to license the core patent(s?) from MIT, they offered MIT a
choice of a large lump sum (the number US$20M sticks in my mind), or US$.01 a
bit.
The negotiators from MIT took the lump sum.
When I first heard this story, I thought it was corrupt folklore, but I later
found it in one of the IBM histories.
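A back-of-the-envelope on that choice, taking the thread's recalled figures at face value (the US$20M and the US$.01/bit are both "as remembered" numbers, so treat this as a sketch, not history):

```python
# Hypothetical back-of-envelope using the figures recalled in this thread:
# at US$0.01 per bit, how much shipped core would it have taken for the
# per-bit royalty to beat the US$20M lump sum MIT reportedly took?
lump_sum_dollars = 20_000_000
royalty_per_bit = 0.01

break_even_bits = lump_sum_dollars / royalty_per_bit
break_even_mib = break_even_bits / 8 / 2**20
print(f"break-even: {break_even_bits:,.0f} bits (~{break_even_mib:.0f} MiB of core)")
```

If IBM ultimately shipped more than that much core across all its machines - which, over the life of the patent, it surely did - MIT left money on the table, which is presumably why the story gets retold.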
This story repeats itself over and over again, though: one of the
Watsons saying there was probably a market for <single-digit> computers;
Ken Olsen saying people wouldn't want computers in their homes; etc, etc.
Noel
* Re: [TUHS] core
2018-06-15 15:25 Noel Chiappa
@ 2018-06-15 23:05 ` Dave Horsfall
2018-06-15 23:22 ` Lyndon Nerenberg
0 siblings, 1 reply; 73+ messages in thread
From: Dave Horsfall @ 2018-06-15 23:05 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Fri, 15 Jun 2018, Noel Chiappa wrote:
> This story repeats itself over and over again, though: one of the
> Watsons saying there was probably a market for <single-digit>
> computers; Ken Olsen saying people wouldn't want computers in their
> homes; etc, etc.
I seem to recall reading somewhere that these were urban myths... Does
anyone have actual references in their contexts?
E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I
think that Olsen's quote was about the DEC-System 10...
-- Dave, who has been known to be wrong before
* Re: [TUHS] core
2018-06-15 23:05 ` Dave Horsfall
@ 2018-06-15 23:22 ` Lyndon Nerenberg
2018-06-16 6:36 ` Dave Horsfall
0 siblings, 1 reply; 73+ messages in thread
From: Lyndon Nerenberg @ 2018-06-15 23:22 UTC (permalink / raw)
To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society
>> This story repeats itself over and over again, though: one of the Watsons
>> saying there was probably a market for <single-digit> computers; Ken
>> Olsen saying people wouldn't want computers in their homes; etc, etc.
>
> I seem to recall reading somewhere that these were urban myths... Does
> anyone have actual references in their contexts?
>
> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I
> think that Olsen's quote was about the DEC-System 10...
At the time, those were the only computers out there. The "personal
computer" was the myth of the day. Go back and look at the prices IBM and
DEC were charging for their gear. Even in the modern context, that
hardware was more expensive than the ridiculously inflated prices of
housing in Vancouver.
--lyndon
* Re: [TUHS] core
2018-06-15 23:22 ` Lyndon Nerenberg
@ 2018-06-16 6:36 ` Dave Horsfall
2018-06-16 19:07 ` Clem Cole
0 siblings, 1 reply; 73+ messages in thread
From: Dave Horsfall @ 2018-06-16 6:36 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Fri, 15 Jun 2018, Lyndon Nerenberg wrote:
>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and
>> I think that Olsen's quote was about the DEC-System 10...
>
> At the time, those were the only computers out there. The "personal
> computer" was the myth of the day. Go back and look at the prices IBM
> and DEC were charging for their gear. Even in the modern context, that
> hardware was more expensive than the ridiculously inflated prices of
> housing in Vancouver.
Precisely, but idiots keep repeating those "quotes" (if they were repeated
accurately at all) in some sort of an effort to make the so-called
"experts" look silly; a form of reverse jealousy/snobbery or something?
It really pisses me off, and I'm sure that there's a medical term for
it...
I came across a web site that had these populist quotes, alongside the
original *taken in context*, but I'm damned if I can find it now.
I mean, how many people could afford a 7094, FFS? The power bill, the air
conditioning, the team of technicians, the warehouse of spare parts...
Hell, I'll bet that my iPhone has more power than our System-360/50, but
it has nowhere near the sheer I/O throughput of a mainframe :-)
-- Dave
* Re: [TUHS] core
2018-06-16 6:36 ` Dave Horsfall
@ 2018-06-16 19:07 ` Clem Cole
2018-06-18 9:25 ` Tim Bradshaw
0 siblings, 1 reply; 73+ messages in thread
From: Clem Cole @ 2018-06-16 19:07 UTC (permalink / raw)
To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society
On Sat, Jun 16, 2018 at 2:36 AM, Dave Horsfall <dave@horsfall.org> wrote:
> ...
>
> Hell, I'll bet that my iPhone has more power than our System-360/50, but
> it has nowhere near the sheer I/O throughput of a mainframe :-)
I sent Dave's comment to the chief designer of the Model 50 (my friend
Russ Robelen).  His reply is cut/pasted below.  For context, he refers to a
very rare book: ‘IBM’s 360 and Early 370 Systems’ by Pugh, Johnson &
J.H. Palmer.  To other IBMers like Russ, the book is considered the
biblical text on the 360:
As to Dave’s comment: I'll bet that my iPhone has more power than our
System-360/50, but it has nowhere near the sheer I/O throughput of a
mainframe.
The ratio of bits of I/O to CPU MIPS was very high back in those days,
particularly for the Model 50, which was considered a ‘Commercial machine’
vs a ‘Scientific machine’.  The machine was doing payroll and inventory
management, high on I/O, low on compute.  Much different today, even for an
iPhone.  The latest iPhone runs on an ARMv8 derivative of Apple's "Swift"
dual-core architecture called "Cyclone", and it runs at 1.3 GHz.  The Mod 50
ran at 2 MHz.  The ARMv8 is a 64-bit machine.  The Mod 50 was a 32-bit machine.
The Mod 50 had no cache (memory ran at 0.5 MHz).  Depending on what
instruction mix you want to use, I would put the iPhone at conservatively
1,500 times the Mod 50.  I might add, a typical Mod 50 system with I/O sold
for $1M.
On the question of memory cost - this is from the bible on 360 mentioned
earlier.
For example, the Model 50 main memory with a read-write cycle of 2
microseconds cost .8 cents per bit.
(Page 194, Chapter 4, ‘IBM’s 360 and Early 370 Systems’)
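Russ's "conservatively 1,500 times" is easy to sanity-check from the clock figures he gives alone (a crude sketch: it ignores instructions-per-clock, the 64-bit datapath, and caches, all of which favour the phone further):

```python
# Crude sanity check of the ~1,500x estimate using only the clock
# figures quoted above (ignores IPC, word width, and cache).
mod50_hz = 2_000_000         # Model 50 CPU clock, per the message
iphone_hz = 1_300_000_000    # "Cyclone" core clock, per the message
cores = 2                    # dual core

clock_ratio = iphone_hz / mod50_hz * cores
print(f"clock-rate ratio alone: {clock_ratio:,.0f}x")
```

That already gives ~1,300x before counting any per-clock advantage, so 1,500x as a floor looks plausible.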
* Re: [TUHS] core
2018-06-16 19:07 ` Clem Cole
@ 2018-06-18 9:25 ` Tim Bradshaw
2018-06-19 20:45 ` Peter Jeremy
0 siblings, 1 reply; 73+ messages in thread
From: Tim Bradshaw @ 2018-06-18 9:25 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection (this is towards the phone, the other direction is much worse). Over WiFi with something serious behind it it gets 70-80Mb/s in both directions. I have no idea what the raw WiFi bandwidth limit is, still less what the raw memory bandwidth is or its bandwidth to 'disk' (ie flash or whatever the storage in the phone is), but the phone has much more bandwidth over a partly wireless network to an endpoint tens or hundreds of miles away than the 360/50 had to main memory.
* Re: [TUHS] core
2018-06-18 9:25 ` Tim Bradshaw
@ 2018-06-19 20:45 ` Peter Jeremy
2018-06-19 22:55 ` David Arnold
0 siblings, 1 reply; 73+ messages in thread
From: Peter Jeremy @ 2018-06-19 20:45 UTC (permalink / raw)
To: Tim Bradshaw; +Cc: The Eunuchs Hysterical Society
On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb@tfeb.org> wrote:
>Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>
>The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection
One way of looking at this actually backs up the claim: An iPhone has maybe
3 orders of magnitude more CPU power than a 360/50 but only a couple of
times the I/O bandwidth. So it's actually got maybe 2 orders of magnitude
less relative I/O throughput.
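Peter's argument can be put as simple arithmetic, using the rough ratios from upthread (a sketch over debatable inputs - "3 orders of magnitude" of CPU and "a couple of times" the I/O are both hand-waves):

```python
import math

# Relative I/O throughput = (I/O gain) / (CPU gain), per the message above.
cpu_gain = 1000   # "3 orders of magnitude more CPU power"
io_gain = 2       # "a couple of times the I/O bandwidth"

relative_io = io_gain / cpu_gain
orders_less = -math.log10(relative_io)
print(f"relative I/O throughput: {relative_io:g} "
      f"(~{orders_less:.1f} orders of magnitude less)")
```

With these inputs it lands between two and three orders of magnitude, consistent with the rough claim.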
--
Peter Jeremy
* Re: [TUHS] core
2018-06-19 20:45 ` Peter Jeremy
@ 2018-06-19 22:55 ` David Arnold
2018-06-20 5:04 ` Peter Jeremy
0 siblings, 1 reply; 73+ messages in thread
From: David Arnold @ 2018-06-19 22:55 UTC (permalink / raw)
To: Peter Jeremy; +Cc: The Eunuchs Hysterical Society
Does the screen count as I/O?
I’d suggest that it’s just that the balance is (intentionally) quite different. If you squint right, a GPU could look like a channelized I/O controller.
d
> On 20 Jun 2018, at 06:45, Peter Jeremy <peter@rulingia.com> wrote:
>
>> On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb@tfeb.org> wrote:
>> Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>>
>> The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection
>
> One way of looking at this actually backs up the claim: An iPhone has maybe
> 3 orders of magnitude more CPU power than a 360/50 but only a couple of
> times the I/O bandwidth. So it's actually got maybe 2 orders of magnitude
> less relative I/O throughput.
>
> --
> Peter Jeremy
* Re: [TUHS] core
2018-06-19 22:55 ` David Arnold
@ 2018-06-20 5:04 ` Peter Jeremy
2018-06-20 5:41 ` Warner Losh
0 siblings, 1 reply; 73+ messages in thread
From: Peter Jeremy @ 2018-06-20 5:04 UTC (permalink / raw)
To: David Arnold; +Cc: The Eunuchs Hysterical Society
On 2018-Jun-20 08:55:05 +1000, David Arnold <davida@pobox.com> wrote:
>Does the screen count as I/O?
I was thinking about that as well. 1080p30 video is around 2MBps as H.264 or
about 140MBps as 6bpp raw. The former is negligible, the latter is still shy
of the disparity in CPU power, especially if you take into account the GPU
power needed to do the decoding.
>I’d suggest that it’s just that the balance is (intentionally) quite different. If you squint right, a GPU could look like a channelized I/O controller.
I agree. Even back then, there was a difference between commercial-oriented
mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and the
scientific mainframes (709x, 360/85 - which stressed arithmetic capabilities).
One, not too inaccurate, description of the BCM2835 (RPi SoC) is that it's a
GPU and the sole job of the CPU is to push data into the GPU.
--
Peter Jeremy
* Re: [TUHS] core
2018-06-20 5:04 ` Peter Jeremy
@ 2018-06-20 5:41 ` Warner Losh
0 siblings, 0 replies; 73+ messages in thread
From: Warner Losh @ 2018-06-20 5:41 UTC (permalink / raw)
To: Peter Jeremy; +Cc: The Eunuchs Hysterical Society
On Tue, Jun 19, 2018 at 11:04 PM, Peter Jeremy <peter@rulingia.com> wrote:
> On 2018-Jun-20 08:55:05 +1000, David Arnold <davida@pobox.com> wrote:
> >Does the screen count as I/O?
>
> I was thinking about that as well. 1080p30 video is around 2MBps as H.264
> or
> about 140MBps as 6bpp raw. The former is negligible, the latter is still
> shy
> of the disparity in CPU power, especially if you take into account the GPU
> power needed to do the decoding.
>
> >I’d suggest that it’s just that the balance is (intentionally) quite
> different. If you squint right, a GPU could look like a channelized I/O
> controller.
>
> I agree. Even back then, there was a difference between
> commercial-oriented
> mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and
> the
> scientific mainframes (709x, 360/85 - which stressed arithmetic
> capabilities).
>
So what could an old mainframe do as far as I/O was concerned?  Google
didn't provide me a straightforward answer...
Warner
* [TUHS] core
@ 2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
` (2 more replies)
0 siblings, 3 replies; 73+ messages in thread
From: A. P. Garcia @ 2018-06-15 11:19 UTC (permalink / raw)
To: tuhs
jay forrester first described an invention called core memory in a lab
notebook 69 years ago today.
kill -3 mailx
core dumped
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
@ 2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
2018-06-20 10:06 ` Dave Horsfall
2 siblings, 0 replies; 73+ messages in thread
From: Steffen Nurpmeso @ 2018-06-15 13:50 UTC (permalink / raw)
To: tuhs
A. P. Garcia wrote in <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivA\
Xg@mail.gmail.com>:
|jay forrester first described an invention called core memory in a \
|lab notebook 69 years ago today.
|
|kill -3 mailx
|
|core dumped
No no, that will not die. It says "Quit", and then exits.
But maybe it will be able to create text/html via MIME alternative
directly a few years from now.  In general I am talking
about my own maintained branch here only, of course.
--End of <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail\
.com>
A nice weekend everybody, may it overtrump the former.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
@ 2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-20 10:06 ` Dave Horsfall
2 siblings, 2 replies; 73+ messages in thread
From: Clem Cole @ 2018-06-15 14:21 UTC (permalink / raw)
To: A. P. Garcia; +Cc: The Eunuchs Hysterical Society
On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
wrote:
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
>
Be careful -- Forrester named it, put it into an array, and built a
random-access memory with it, but An Wang invented and patented the basic
technology we now call 'core' in 1955: US patent 2,708,722
<https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
memory').   As I understand it (I'm old, but not that old, so I cannot
speak from experience, as I was a youngling when this patent came about),
Wang thought Forrester's use of his idea was great, but Wang's patent was
the broader one.
There is an interesting history of Wang and his various fights at: An Wang
- The Man Who Might Have Invented The Personal Computer
<http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
* Re: [TUHS] core
2018-06-15 14:21 ` Clem Cole
@ 2018-06-15 15:11 ` John P. Linderman
2018-06-15 15:21 ` Larry McVoy
2018-06-16 1:08 ` Greg 'groggy' Lehey
1 sibling, 1 reply; 73+ messages in thread
From: John P. Linderman @ 2018-06-15 15:11 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
Excerpt from the article Clem pointed to:
The company was called Wang Laboratories and it specialised in magnetic
memories. He used his contacts to sell magnetic cores which he built and
sold for $4 each.
4 bucks a bit!
On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc@ccc.com> wrote:
>
>
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> Be careful -- Forrester named it, put it into an array, and built a
> random-access memory with it, but An Wang invented and patented the basic
> technology we now call 'core' in 1955: US patent 2,708,722
> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> memory').   As I understand it (I'm old, but not that old, so I cannot
> speak from experience, as I was a youngling when this patent came about),
> Wang thought Forrester's use of his idea was great, but Wang's patent was
> the broader one.
>
> There is an interesting history of Wang and his various fights at: An
> Wang - The Man Who Might Have Invented The Personal Computer
> <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
>
>
* Re: [TUHS] core
2018-06-15 15:11 ` John P. Linderman
@ 2018-06-15 15:21 ` Larry McVoy
0 siblings, 0 replies; 73+ messages in thread
From: Larry McVoy @ 2018-06-15 15:21 UTC (permalink / raw)
To: John P. Linderman; +Cc: The Eunuchs Hysterical Society
And today it is $.000000009 / bit.
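Putting the two per-bit prices quoted in this thread side by side (Wang's $4 hand-built cores and the $.000000009 figure above; both are loose numbers, so this is illustrative only):

```python
# Ratio of the two per-bit memory prices quoted in this thread.
core_dollars_per_bit = 4.0       # Wang's hand-built cores, circa 1950s
dram_dollars_per_bit = 9e-9      # present-day figure quoted above

ratio = core_dollars_per_bit / dram_dollars_per_bit
print(f"per-bit price fell by a factor of ~{ratio:.1e}")
```

Call it a bit under half a billion to one, not counting inflation (which would make the fall even steeper).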
On Fri, Jun 15, 2018 at 11:11:19AM -0400, John P. Linderman wrote:
> Excerpt from the article Clem pointed to:
>
> The company was called Wang Laboratories and it specialised in magnetic
> memories. He used his contacts to sell magnetic cores which he built and
> sold for $4 each.
>
>
> 4 bucks a bit!
>
> On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc@ccc.com> wrote:
>
> >
> >
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it, put it into an array, and built a
> > random-access memory with it, but An Wang invented and patented the basic
> > technology we now call 'core' in 1955: US patent 2,708,722
> > <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> > memory').  As I understand it (I'm old, but not that old, so I cannot
> > speak from experience, as I was a youngling when this patent came about),
> > Wang thought Forrester's use of his idea was great, but Wang's patent was
> > the broader one.
> >
> > There is an interesting history of Wang and his various fights at: An
> > Wang - The Man Who Might Have Invented The Personal Computer
> > <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
> >
> >
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
@ 2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
` (2 more replies)
1 sibling, 3 replies; 73+ messages in thread
From: Greg 'groggy' Lehey @ 2018-06-16 1:08 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1041 bytes --]
On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> Be careful -- Forrester named it, put it into an array, and built a
> random access memory with it, but An Wang invented and patented the basic
> technology we now call 'core' in 1955, patent 2,708,722
> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> memory').
The patent may date from 1955, but by that time it was already in use.
Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
memory. There are some interesting photos at
https://en.wikipedia.org/wiki/Magnetic-core_memory.
Greg
--
Sent from my desktop computer.
Finger grog@lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
@ 2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 73+ messages in thread
From: Nemo Nusquam @ 2018-06-16 2:00 UTC (permalink / raw)
To: tuhs
On 06/15/18 21:08, Greg 'groggy' Lehey wrote:
> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
>> wrote:
>>
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>>
>> Be careful -- Forrester named it, put it into an array, and built a
>> random access memory with it, but An Wang invented and patented the basic
>> technology we now call 'core' in 1955, patent 2,708,722
>> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
>> memory').
>
> The patent may date from 1955,
The patent issued in 1955 but the priority date is listed as 1949-10-21.
N.
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
@ 2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 73+ messages in thread
From: John P. Linderman @ 2018-06-16 2:17 UTC (permalink / raw)
To: Greg 'groggy' Lehey; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1506 bytes --]
Forrester was my freshman adviser. A remarkably nice person. For those of
us who weren't going home for Thanksgiving, he invited us to his home. Well
above the call of duty. It's my understanding that his patent wasn't
anywhere near the biggest moneymaker. A patent for the production of
penicillin was, as I understood, the biggie.
On Fri, Jun 15, 2018 at 9:08 PM, Greg 'groggy' Lehey <grog@lemis.com> wrote:
> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <
> a.phillip.garcia@gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it, put it into an array, and built a
> > random access memory with it, but An Wang invented and patented the basic
> > technology we now call 'core' in 1955, patent 2,708,722
> > <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> > memory').
>
> The patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory. There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog@lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed. If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA
>
[-- Attachment #2: Type: text/html, Size: 2442 bytes --]
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
@ 2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 73+ messages in thread
From: Clem cole @ 2018-06-16 3:06 UTC (permalink / raw)
To: Greg 'groggy' Lehey; +Cc: The Eunuchs Hysterical Society
Greg -- Sorry if my words came across as implying that 1955 was the date of the invention. I only wanted to clarify that Wang had the idea of magnetic (core) memory before Forrester; 1955 is the date of the patent grant, as you mentioned. My primary point was to be careful about giving all the credit to Forrester. It took both, as I understand the history. Again, I was not part of that fight 😉
It seems the courts have said what I mentioned -- it was Wang’s original idea, and I just wanted the history stated more clearly. That said, Forrester obviously improved on it (made it practical). And, by the way, IBM needed licenses for both to build their products.
FWIW: I’ve been on a couple of corporate patent committees. I commented because we sometimes use the history of core as an example when we teach young inventors how to develop a proper invention disclosure (and it is how I was taught about some of these issues). I have seen what Forrester did used as an example of an improvement on, and differentiation from, a previous idea. That said, Wang had the fundamental patent, and Forrester needed to rely on Wang’s idea to “reduce to practice” his own. As I said, IBM needed to license both in the end to make a product.
What we try to teach is how to show that something is new (novel, in patent-speak) and to make sure inventors disclose what their ideas are built upon. And, as importantly: are you truly novel, and if you are, can you build on that previous idea without a license?
This is actually pretty important to get right and is not just an academic exercise. For instance, a few years ago I was granted a fundamental computer synchronization patent for use in building supercomputers out of smaller computers (i.e. clusters). When we wrote the disclosure we had to show what was the same, what was built upon, and what made it new/different. Computer synchronization is an old idea, but what we did was quite new (and frankly would not have been considered in the 60s, as those designers did not have the issues of scale we have today). But because we were able to show both the base and how it was novel, the application went right through both in the USA and internationally.
So, back to my point: Forrester came up with the idea and a practical scheme to use Wang’s concept of magnetic memory in an array, which, as I understand it, was what made Wang’s idea practical -- just as my scheme did not invent synchronization, and traditional 1960s-style schemes are impractical with today’s technology.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
> On Jun 15, 2018, at 9:08 PM, Greg 'groggy' Lehey <grog@lemis.com> wrote:
>
>> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
>> wrote:
>>
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>>
>> Be careful -- Forrester named it, put it into an array, and built a
>> random access memory with it, but An Wang invented and patented the basic
>> technology we now call 'core' in 1955, patent 2,708,722
>> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
>> memory').
>
> The patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory. There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog@lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed. If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA
^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
@ 2018-06-20 10:06 ` Dave Horsfall
2 siblings, 0 replies; 73+ messages in thread
From: Dave Horsfall @ 2018-06-20 10:06 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
(Yeah, I'm a bit behind right now; I got involved in some electronics stuff
for a change.)
On Fri, 15 Jun 2018, A. P. Garcia wrote:
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Odd, I don't have that in my history notes; may I use it?
-- Dave
^ permalink raw reply [flat|nested] 73+ messages in thread
end of thread, other threads:[~2018-06-21 23:39 UTC | newest]
Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
2018-06-16 22:14 ` [TUHS] core Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
2018-06-20 21:35 ` Johnny Billquist
2018-06-20 22:24 ` Paul Winalski
2018-06-21 22:44 Nelson H. F. Beebe
2018-06-21 23:07 ` Grant Taylor via TUHS
2018-06-21 23:38 ` Toby Thain
-- strict thread matches above, loose matches on Subject: below --
2018-06-21 17:40 Noel Chiappa
2018-06-21 17:07 Nelson H. F. Beebe
2018-06-21 13:46 Doug McIlroy
2018-06-21 16:13 ` Tim Bradshaw
2018-06-20 20:11 Doug McIlroy
2018-06-20 23:53 ` Tim Bradshaw
2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
2018-06-19 13:53 ` John P. Linderman
2018-06-21 14:09 ` Paul Winalski
2018-06-19 11:50 Doug McIlroy
[not found] <20180618121738.98BD118C087@mercury.lcs.mit.edu>
2018-06-18 21:13 ` Johnny Billquist
2018-06-18 17:56 Noel Chiappa
2018-06-18 18:51 ` Clem Cole
2018-06-18 14:51 Noel Chiappa
2018-06-18 14:58 ` Warner Losh
[not found] <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
2018-06-18 5:58 ` Johnny Billquist
2018-06-18 12:39 ` Ronald Natalie
2018-06-17 21:18 Noel Chiappa
2018-06-17 17:58 Noel Chiappa
2018-06-18 9:16 ` Tim Bradshaw
2018-06-18 15:33 ` Tim Bradshaw
2018-06-17 14:36 Noel Chiappa
2018-06-17 15:58 ` Derek Fawcus
2018-06-16 22:57 Noel Chiappa
2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 15:28 ` Larry McVoy
2018-06-16 14:10 ` Steve Nickolas
2018-06-16 14:13 ` William Pechter
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
2018-06-18 12:36 ` Tony Finch
2018-06-16 12:58 Doug McIlroy
2018-06-20 11:10 ` Dave Horsfall
2018-06-16 12:51 Noel Chiappa
2018-06-16 13:11 ` Clem cole
2018-06-15 15:25 Noel Chiappa
2018-06-15 23:05 ` Dave Horsfall
2018-06-15 23:22 ` Lyndon Nerenberg
2018-06-16 6:36 ` Dave Horsfall
2018-06-16 19:07 ` Clem Cole
2018-06-18 9:25 ` Tim Bradshaw
2018-06-19 20:45 ` Peter Jeremy
2018-06-19 22:55 ` David Arnold
2018-06-20 5:04 ` Peter Jeremy
2018-06-20 5:41 ` Warner Losh
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
2018-06-15 15:21 ` Larry McVoy
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2018-06-20 10:06 ` Dave Horsfall
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).