* Re: [TUHS] core
@ 2018-06-21 22:44 Nelson H. F. Beebe
2018-06-21 23:07 ` Grant Taylor via TUHS
0 siblings, 1 reply; 133+ messages in thread
From: Nelson H. F. Beebe @ 2018-06-21 22:44 UTC (permalink / raw)
To: tuhs
The discussion of the last few days about the Colossus is getting
somewhat off-topic for Unix heritage and history, but perhaps this
list of titles (truncated to 80 characters) and locations may be of
interest to some TUHS list readers.
------------------------------------------------------------------------
Papers and book chapters about the Colossus at Bletchley Park
Journal- and subject-specific bibliography files are found at
http://www.math.utah.edu/pub/tex/bib/NAME.bib
and author-specific ones under
http://www.math.utah.edu/pub/bibnet/authors/N/
where N is the first letter of the bibliography filename.
The first line in each group contains the bibliography filename, and
the citation label of the cited publication. DOIs are supplied
whenever they are available. Paragraphs are sorted by filename.
adabooks.bib Randell:1976:C
The COLOSSUS
annhistcomput.bib Good:1979:EWC
Early Work on Computers at Bletchley
https://doi.org/10.1109/MAHC.1979.10011
annhistcomput.bib deLeeuw:2007:HIS
Tunny and Colossus: breaking the Lorenz Schl{\"u}sselzusatz traffic
https://www.sciencedirect.com/science/article/pii/B978044451608450016X
babbage-charles.bib Randell:1976:C
The COLOSSUS
cacm2010.bib Anderson:2013:MNF
Max Newman: forgotten man of early British computing
https://doi.org/10.1145/2447976.2447986
computer1990.bib Anonymous:1996:UWI
Update: WW II Colossus computer is restored to operation
cryptography.bib Good:1979:EWC
Early Work on Computers at Bletchley
cryptography1990.bib Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758
cryptography2000.bib Rojas:2000:FCH
The first computers: history and architectures
cryptography2010.bib Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His
cryptologia.bib Michie:2002:CBW
Colossus and the Breaking of the Wartime ``Fish'' Codes
debroglie-louis.bib Homberger:1970:CMN
The Cambridge mind: ninety years of the Cambridge Review, 1879--1969
dijkstra-edsger-w.bib Randell:1980:C
The COLOSSUS
dyson-freeman-j.bib Brockman:2015:WTA
What to think about machines that think: today's leading thinkers on the age of
fparith.bib Randell:1977:CGC
Colossus: Godfather of the Computer
ieeeannhistcomput.bib Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758
lncs1997b.bib Carter:1997:BLC
The Breaking of the Lorenz Cipher: An Introduction to the Theory behind the Oper
lncs2000.bib Sale:2000:CGL
Colossus and the German Lorenz Cipher --- Code Breaking in WW II
master.bib Randell:1977:CGC
Colossus: Godfather of the Computer
mathcw.bib Randell:1982:ODC
The Origins of Digital Computers: Selected Papers
http://dx.doi.org/10.1007/978-3-642-61812-3
rutherford-ernest.bib Homberger:1970:CMN
The Cambridge mind: ninety years of the Cambridge Review, 1879--1969
rutherfordj.bib Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His
sigcse1990.bib Plimmer:1998:MIW
Machines invented for WW II code breaking
https://doi.org/10.1145/306286.306309
turing-alan-mathison.bib Randell:1976:C
The COLOSSUS
von-neumann-john.bib Randell:1982:ODC
The Origins of Digital Computers: Selected Papers
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe@math.utah.edu -
- 155 S 1400 E RM 233 beebe@acm.org beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
* Re: [TUHS] core
2018-06-21 22:44 [TUHS] core Nelson H. F. Beebe
@ 2018-06-21 23:07 ` Grant Taylor via TUHS
2018-06-21 23:38 ` Toby Thain
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
0 siblings, 2 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-21 23:07 UTC (permalink / raw)
To: tuhs
On 06/21/2018 04:44 PM, Nelson H. F. Beebe wrote:
> The discussion of the last few days about the Colossus is getting somewhat
> off-topic for Unix heritage and history
This is the the first time that this problem has come up.
Would it be worth while to have another (sub)mailing list that such
topics can be moved to?
The fact that the topics linger makes me think that people are interested
in having the conversations. So perhaps another venue is worthwhile.
Sort of like cctalk & cctech.
I make a motion for tuhs-off-topic.
--
Grant. . . .
unix || die
* Re: [TUHS] core
2018-06-21 23:07 ` Grant Taylor via TUHS
@ 2018-06-21 23:38 ` Toby Thain
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
1 sibling, 0 replies; 133+ messages in thread
From: Toby Thain @ 2018-06-21 23:38 UTC (permalink / raw)
To: Grant Taylor, tuhs
On 2018-06-21 7:07 PM, Grant Taylor via TUHS wrote:
> On 06/21/2018 04:44 PM, Nelson H. F. Beebe wrote:
>> The discussion of the last few days about the Colossus is getting
>> somewhat off-topic for Unix heritage and history
>
> This is the the first time that this problem has come up.
>
> Would it be worth while to have another (sub)mailing list that such
> topics can be moved to?
>
> The fact that the topics linger makes me think that people are interested
> in having the conversations. So perhaps another venue is worthwhile.
>
> Sort of like cctalk & cctech.
...seems like you just named an appropriate venue for that conversation...
--T
>
> I make a motion for tuhs-off-topic.
>
>
>
* Re: [TUHS] off-topic list
2018-06-21 23:07 ` Grant Taylor via TUHS
2018-06-21 23:38 ` Toby Thain
@ 2018-06-21 23:47 ` Warren Toomey
2018-06-22 1:11 ` Grant Taylor via TUHS
` (2 more replies)
1 sibling, 3 replies; 133+ messages in thread
From: Warren Toomey @ 2018-06-21 23:47 UTC (permalink / raw)
Cc: tuhs
On Thu, Jun 21, 2018 at 05:07:48PM -0600, Grant Taylor via TUHS wrote:
>This is the the first time that this problem has come up.
No :-)
>Would it be worth while to have another (sub)mailing list that such
>topics can be moved to? I make a motion for tuhs-off-topic.
I'm happy to make a mailing list here for it. But perhaps a name that
reflects its content. Computing history is maybe too generic. Ideas?
Warren
* Re: [TUHS] off-topic list
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
@ 2018-06-22 1:11 ` Grant Taylor via TUHS
2018-06-22 3:53 ` Robert Brockway
2018-06-22 4:18 ` Dave Horsfall
2 siblings, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-22 1:11 UTC (permalink / raw)
To: tuhs
On 06/21/2018 05:47 PM, Warren Toomey wrote:
>> This is the the first time that this problem has come up.
This is *not* the first time that this problem has come up.
Apparently I can't articulate myself properly.
> No :-)
;-)
> I'm happy to make a mailing list here for it. But perhaps a name that
> reflects its content. Computing history is maybe too generic. Ideas?
"tuhs-open-discussion" It's a play on Open Systems and still related to
things tangentially related to Unix.
"tuhs-cantina"
¯\_(ツ)_/¯
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
2018-06-22 1:11 ` Grant Taylor via TUHS
@ 2018-06-22 3:53 ` Robert Brockway
2018-06-22 4:18 ` Dave Horsfall
2 siblings, 0 replies; 133+ messages in thread
From: Robert Brockway @ 2018-06-22 3:53 UTC (permalink / raw)
To: Warren Toomey; +Cc: tuhs
On Fri, 22 Jun 2018, Warren Toomey wrote:
> I'm happy to make a mailing list here for it. But perhaps a name that
> reflects its content. Computing history is maybe too generic. Ideas?
Other groups have often elected to go with -chat.
[TUHS-chat]?
Rob
* Re: [TUHS] off-topic list
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
2018-06-22 1:11 ` Grant Taylor via TUHS
2018-06-22 3:53 ` Robert Brockway
@ 2018-06-22 4:18 ` Dave Horsfall
2018-06-22 11:44 ` Arthur Krewat
` (2 more replies)
2 siblings, 3 replies; 133+ messages in thread
From: Dave Horsfall @ 2018-06-22 4:18 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Fri, 22 Jun 2018, Warren Toomey wrote:
> I'm happy to make a mailing list here for it. But perhaps a name that
> reflects its content. Computing history is maybe too generic. Ideas?
COFF - Computer Old Fart Followers.
-- Dave
* Re: [TUHS] off-topic list
2018-06-22 4:18 ` Dave Horsfall
@ 2018-06-22 11:44 ` Arthur Krewat
2018-06-22 14:28 ` Larry McVoy
2018-06-22 17:29 ` Cág
2 siblings, 0 replies; 133+ messages in thread
From: Arthur Krewat @ 2018-06-22 11:44 UTC (permalink / raw)
To: tuhs
I was trying to come up with an acronym involving "old" - you got it ;)
On 6/22/2018 12:18 AM, Dave Horsfall wrote:
> COFF - Computer Old Fart Followers.
* Re: [TUHS] off-topic list
2018-06-22 4:18 ` Dave Horsfall
2018-06-22 11:44 ` Arthur Krewat
@ 2018-06-22 14:28 ` Larry McVoy
2018-06-22 14:46 ` Tim Bradshaw
` (4 more replies)
2018-06-22 17:29 ` Cág
2 siblings, 5 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-22 14:28 UTC (permalink / raw)
To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society
For the record, I'm fine with old stuff getting discussed on TUHS.
Even not Unix stuff. We wandered into Linux/ext2 with Ted, that was fun.
We've wandered into the VAX history (UWisc got an 8600 and called it
"speedy" - what a mistake) and I learned stuff I didn't know, so that
was fun (and sounded just like history at every company I've worked for,
sadly :)
I think this list has self selected for adults who are reasonable. So
we mostly are fine.
Warren, I'd ask your heavy hitters, like Ken, if it's ok if the list
wanders a bit. If he and his ilk are fine, I'd just leave it all on
one list. It's a fun list.
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
* Re: [TUHS] off-topic list
2018-06-22 14:28 ` Larry McVoy
@ 2018-06-22 14:46 ` Tim Bradshaw
2018-06-22 14:54 ` Larry McVoy
2018-06-22 14:52 ` Ralph Corderoy
` (3 subsequent siblings)
4 siblings, 1 reply; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-22 14:46 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
On 22 Jun 2018, at 15:28, Larry McVoy <lm@mcvoy.com> wrote:
>
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff. We wandered into Linux/ext2 with Ted, that was fun.
> We've wandered into the VAX history (UWisc got an 8600 and called it
> "speedy" - what a mistake) and I learned stuff I didn't know, so that
> was fun (and sounded just like history at every company I've worked for,
> sadly :)
As a perpetrator of some of this off-topicness I think (on reflection, I originally also wondered about a different list) that a good approach would be to allow anyone, if something is just clearly off-topic, to say 'please take this to a more appropriate forum', but if no-one does so it should be fine. Obviously this requires people to actually do that, but I hope no-one sits and fumes without saying anything. Equally obviously this is just my opinion.
(My main problem with off-topic things is now that the OSX mail client no longer seems to be able to reliably thread things which I find an astonishing regression, so I have to delete several threads. I suspect it gets precisely no care and feeding from Apple, at least not from anyone who understands email.)
--tim
* Re: [TUHS] off-topic list
2018-06-22 14:46 ` Tim Bradshaw
@ 2018-06-22 14:54 ` Larry McVoy
2018-06-22 15:17 ` Steffen Nurpmeso
2018-06-22 16:07 ` Tim Bradshaw
0 siblings, 2 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-22 14:54 UTC (permalink / raw)
To: Tim Bradshaw; +Cc: The Eunuchs Hysterical Society
On Fri, Jun 22, 2018 at 03:46:06PM +0100, Tim Bradshaw wrote:
> (My main problem with off-topic things is now that the OSX mail client no longer seems to be able to reliably thread things which I find an astonishing regression, so I have to delete several threads. I suspect it gets precisely no care and feeding from Apple, at least not from anyone who understands email.)
Try mutt, it's what I use and it threads topics just fine.
* Re: [TUHS] off-topic list
2018-06-22 14:54 ` Larry McVoy
@ 2018-06-22 15:17 ` Steffen Nurpmeso
2018-06-22 17:27 ` Grant Taylor via TUHS
2018-06-22 16:07 ` Tim Bradshaw
1 sibling, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-22 15:17 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
Larry McVoy wrote in <20180622145402.GT21272@mcvoy.com>:
|On Fri, Jun 22, 2018 at 03:46:06PM +0100, Tim Bradshaw wrote:
|> (My main problem with off-topic things is now that the OSX mail client \
|> no longer seems to be able to reliably thread things which I find \
|> an astonishing regression, so I have to delete several threads. \
|> I suspect it gets precisely no care and feeding from Apple, at least \
|> not from anyone who understands email.)
|
|Try mutt, it's what I use and it threads topics just fine.
--End of <20180622145402.GT21272@mcvoy.com>
I understood that as a request that people seem to have forgotten
that In-Reply-To: should be removed when doing the "Yz (Was: Xy)",
so that a new thread is created, actually. Or take me: i seem to
never have learned that at first!
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
* Re: [TUHS] off-topic list
2018-06-22 15:17 ` Steffen Nurpmeso
@ 2018-06-22 17:27 ` Grant Taylor via TUHS
2018-06-22 19:25 ` Steffen Nurpmeso
0 siblings, 1 reply; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-22 17:27 UTC (permalink / raw)
To: tuhs
On 06/22/2018 09:17 AM, Steffen Nurpmeso wrote:
> I understood that as a request that people seem to have forgotten that
> In-Reply-To: should be removed when doing the "Yz (Was: Xy)", so that
> a new thread is created, actually. Or take me: i seem to never have
> learned that at first!
I agree that removing In-Reply-To and References is a good thing to do
when breaking a thread. But not all MUAs make that possible, much less
easy. Which means that you're left with starting a new message to the
same recipients.
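What that header surgery amounts to is tiny; here is a purely illustrative
Python sketch (the function name is made up, and it assumes you can touch
the outgoing message programmatically, which is exactly what many MUAs
don't let you do):

from email.message import EmailMessage

def break_thread(reply: EmailMessage, new_subject: str) -> EmailMessage:
    # Drop the headers that threading MUAs key on, so the message
    # starts a fresh thread instead of continuing the old one.
    del reply['In-Reply-To']
    del reply['References']
    del reply['Subject']               # delete first so we don't duplicate
    reply['Subject'] = new_subject     # e.g. "Yz (Was: Xy)"
    return reply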
I've also seen how (sub)threads tend to drift from their original intent
and then someone modifies the subject some time later.
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-22 17:27 ` Grant Taylor via TUHS
@ 2018-06-22 19:25 ` Steffen Nurpmeso
2018-06-22 21:04 ` Grant Taylor via TUHS
0 siblings, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-22 19:25 UTC (permalink / raw)
To: Grant Taylor via TUHS
Grant Taylor via TUHS wrote in <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spa\
mtrap.tnetconsulting.net>:
|On 06/22/2018 09:17 AM, Steffen Nurpmeso wrote:
|> I understood that as a request that people seem to have forgotten that
|> In-Reply-To: should be removed when doing the "Yz (Was: Xy)", so that
|> a new thread is created, actually. Or take me: i seem to never have
|> learned that at first!
|
|I agree that removing In-Reply-To and References is a good thing to do
|when breaking a thread. But not all MUAs make that possible, much less
|easy. Which means that you're left with starting a new message to the
|same recipients.
True, but possible since some time, for the thing i maintain, too,
unfortunately. It will be easy starting with the next release,
however.
|I've also seen how (sub)threads tend to drift from their original intent
|and then someone modifies the subject some time later.
Yes. Yes. And then, whilst not breaking the thread stuff as
such, there is the "current funny thing to do", which also impacts
thread visualization sometimes. For example replacing spaces with
tabulators so that the "is the same thread subject" compression
cannot work, so one first thinks the subject has really changed.
At the moment top posting seems to be a fascinating thing to do.
And you seem to be using DMARC, which irritates the list-reply
mechanism of at least my MUA.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
* Re: [TUHS] off-topic list
2018-06-22 19:25 ` Steffen Nurpmeso
@ 2018-06-22 21:04 ` Grant Taylor via TUHS
2018-06-23 14:49 ` Steffen Nurpmeso
0 siblings, 1 reply; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-22 21:04 UTC (permalink / raw)
To: tuhs
On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
> True, but possible since some time, for the thing i maintain, too,
> unfortunately. It will be easy starting with the next release, however.
I just spent a few minutes looking at how to edit headers in reply
messages in Thunderbird and I didn't quickly find it. (I do find an
Add-On that allows editing messages in reader, but not the composer.)
> Yes. Yes. And then, whilst not breaking the thread stuff as such,
> there is the "current funny thing to do", which also impacts thread
> visualization sometimes. For example replacing spaces with tabulators
> so that the "is the same thread subject" compression cannot work, so
> one first thinks the subject has really changed.
IMHO that's the wrong way to thread. I believe threading should be done
by the In-Reply-To: and References: headers.
I consider Subject: based threading to be a hack. But it's a hack that
many people use. I think Thunderbird even uses it by default. (I've
long since disabled it.)
> At the moment top posting seems to be a fascinating thing to do.
I blame ignorance and the prevalence of tools that encourage such behavior.
> And you seem to be using DMARC, which irritates the list-reply mechanism
> of at least my MUA.
Yes I do use DMARC as well as DKIM and SPF (w/ -all). I don't see how
me using that causes problems with "list-reply".
My working understanding is that "list-reply" should reply to the list's
posting address in the List-Post: header.
List-Post: <mailto:tuhs@minnie.tuhs.org>
What am I missing or not understanding?
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-22 21:04 ` Grant Taylor via TUHS
@ 2018-06-23 14:49 ` Steffen Nurpmeso
2018-06-23 15:25 ` Toby Thain
2018-06-23 18:49 ` Grant Taylor via TUHS
0 siblings, 2 replies; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-23 14:49 UTC (permalink / raw)
To: Grant Taylor via TUHS
Hello.
Grant Taylor via TUHS wrote in <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spa\
mtrap.tnetconsulting.net>:
|On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
|> True, but possible since some time, for the thing i maintain, too,
|> unfortunately. It will be easy starting with the next release, however.
|
|I just spent a few minutes looking at how to edit headers in reply
|messages in Thunderbird and I didn't quickly find it. (I do find an
|Add-On that allows editing messages in reader, but not the composer.)
Oh, I do not know: i have never used a graphical MUA, only pine,
then mutt, and now, while anytime before now and then anyway, BSD
Mail. Only once i implemented RFC 2231 support i started up Apple
Mail (of an up-to-date Snow Leopard i had back then) to see how
they do it, and that was a brainwave because they i think
misinterpreted RFC 2231 to only allow MIME parameters to be split
in ten sections. (But then i recall that i have retested that
with a Tiger Mail once i had a chance to, and there the bug was
corrected.)
Graphical user interfaces are a difficult abstraction, or tedious
to use. I have to use a graphical browser, and it is always
terrible to enable Cookies for a site. For my thing i hope we
will at some future day be so resilient that users can let go the
nmh mailer without losing any freedom.
I mean, user interfaces are really a pain, and i think this will
not go away until we come to that brain implant which i have no
doubt will arrive some day, and then things may happen with
a think. Things like emacs or Acme i can understand, and the
latter is even Unix like in the way it works.
Interesting that most old a.k.a. established Unix people give up
that Unix freedom of everything-is-a-file, that was there for
email access via nupas -- the way i have seen it in Plan9 (i never
ran a research Unix), at least -- in favour of a restrictive
graphical user interface!
|> Yes. Yes. And then, whilst not breaking the thread stuff as such,
|> there is the "current funny thing to do", which also impacts thread
|> visualization sometimes. For example replacing spaces with tabulators
|> so that the "is the same thread subject" compression cannot work, so
|> one first thinks the subject has really changed.
|
|IMHO that's the wrong way to thread. I believe threading should be done
|by the In-Reply-To: and References: headers.
|
|I consider Subject: based threading to be a hack. But it's a hack that
|many people use. I think Thunderbird even uses it by default. (I've
|long since disabled it.)
No, we use the same threading algorithm that Zawinski described
([1], "the threading algorithm that was used in Netscape Mail and
News 2.0 and 3.0"). I meant, in a threaded display, successive
follow-up messages which belong to the same thread will not
reiterate the Subject:, because it is the same one as before, and
that is irritating.
[1] http://www.jwz.org/doc/threading.html
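A much-simplified Python sketch of that References:/In-Reply-To: grouping
(the real algorithm at [1] also builds a parent/child tree, invents
containers for missing ancestors, and only falls back to the Subject: as a
last resort; none of that is attempted here, and the function names are
made up for illustration):

import re
from collections import defaultdict
from email.message import Message

MSGID = re.compile(r'<[^>]+>')

def root_id(msg: Message) -> str:
    # Oldest ancestor this message admits to: the first entry in
    # References:, else the In-Reply-To: parent, else its own Message-ID
    # (i.e. it starts a new thread).
    for header in ('References', 'In-Reply-To', 'Message-ID'):
        ids = MSGID.findall(msg.get(header, ''))
        if ids:
            return ids[0]
    return '<no-id>'    # pathological message with no usable IDs at all

def group_into_threads(messages):
    threads = defaultdict(list)
    for m in messages:
        threads[root_id(m)].append(m)
    return threads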
|> At the moment top posting seems to be a fascinating thing to do.
|
|I blame ignorance and the prevalence of tools that encourage such behavior.
I vote for herd instinct, but that is a non-scientific view. We
here where i live had the "fold rear view mirrors even if not done
automatically by the car" until some young men started crashing
non-folded ones with baseball bats (say: who else would crash
rearview mirrors, i have not seen one doing this, but a lot of
damaged mirrors by then), and since a couple of years we have
"cut trees and hedges and replace with grass", which is really,
really terrible and i am not living at the same place than before.
Many birds, bats etc. think the same, so i am not alone; i hope
this ends soon. I can have a walk to reach nightingales, though,
and still. I would hope for "turning off the lights to be able to
see a true night sky", but .. i am dreaming.
|> And you seem to be using DMARC, which irritates the list-reply mechanism
|> of at least my MUA.
|
|Yes I do use DMARC as well as DKIM and SPF (w/ -all). I don't see how
|me using that causes problems with "list-reply".
|
|My working understanding is that "list-reply" should reply to the list's
|posting address in the List-Post: header.
|
|List-Post: <mailto:tuhs@minnie.tuhs.org>
|
|What am I missing or not understanding?
That is not how it works for the MUAs i know. It is an
interesting idea. And in fact it is used during the "mailing-list
address-massage" if possible. But one must or should
differentiate in between a subscribed list and a non-subscribed
list, for example. This does not work without additional
configuration (e.g., we have `mlist' and `mlsubscribe' commands to
make known mailing-lists to the machine), though List-Post: we use
for automatic configuration (as via `mlist').
And all in all this is a complicated topic (there are
Mail-Followup-To: and Reply-To:, for example), and before you say
"But what i want is a list reply!", yes, of course you are right.
But. For my thing i hope i have found a sensible way through
this, and initially also one that does not deter users of console
MUA number one (mutt).
--End of <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
Cheerio.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
* Re: [TUHS] off-topic list
2018-06-23 14:49 ` Steffen Nurpmeso
@ 2018-06-23 15:25 ` Toby Thain
2018-06-23 18:49 ` Grant Taylor via TUHS
1 sibling, 0 replies; 133+ messages in thread
From: Toby Thain @ 2018-06-23 15:25 UTC (permalink / raw)
To: tuhs
On 2018-06-23 10:49 AM, Steffen Nurpmeso wrote:
> Hello.
>
> Grant Taylor via TUHS wrote in <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spa\
> mtrap.tnetconsulting.net>:
> |On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
> |> True, but possible since some time, for the thing i maintain, too,
> |> unfortunately. It will be easy starting with the next release, however.
> |
> |I just spent a few minutes looking at how to edit headers in reply
> |messages in Thunderbird and I didn't quickly find it. (I do find an
> |Add-On that allows editing messages in reader, but not the composer.)
>
> Oh, I do not know: i have never used a graphical MUA, only pine,
> then mutt, and now, while anytime before now and then anyway, BSD
> Mail. Only once i implemented RFC 2231 support ...
Like most retrocomputing lists, only one off-topic list is ever really
needed: "Email RFCs and etiquette"
* Re: [TUHS] off-topic list
2018-06-23 14:49 ` Steffen Nurpmeso
2018-06-23 15:25 ` Toby Thain
@ 2018-06-23 18:49 ` Grant Taylor via TUHS
2018-06-23 21:05 ` Tom Ivar Helbekkmo via TUHS
` (2 more replies)
1 sibling, 3 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-23 18:49 UTC (permalink / raw)
To: tuhs
On 06/23/2018 08:49 AM, Steffen Nurpmeso wrote:
> Hello.
Hi,
> Oh, I do not know: i have never used a graphical MUA, only pine, then
> mutt, and now, while anytime before now and then anyway, BSD Mail.
I agree that text mode MUAs from before the turn of the century do have
a LOT more functionality than most GUI MUAs that came after that point
in time.
Thankfully we are free to use what ever MUA we want to. :-)
> they i think misinterpreted RFC 2231 to only allow MIME parameters to be
> split in ten sections.
I've frequently found things that MUAs (and other applications) don't do
properly. That's when it becomes a learning game to see what subset of
the defined standard was implemented incorrectly and deciding if I want
to work around it or not.
> Graphical user interfaces are a difficult abstraction, or tedious to use.
> I have to use a graphical browser, and it is always terrible to enable
> Cookies for a site.
Cookies are their own problem as of late. All the "We use
cookies...<bla>...<bla>...<bla>...." warnings that we now seem to have
to accept get really annoying. I want a cookie, or more likely a
header, that says "I accept (first party) cookies." as a signal to not
pester me.
> For my thing i hope we will at some future day be so resilient that
> users can let go the nmh mailer without losing any freedom.
Why do you have to let go of one tool? Why can't you use a suite of
tools that collectively do what you want?
> I mean, user interfaces are really a pain, and i think this will not
> go away until we come to that brain implant which i have no doubt will
> arrive some day, and then things may happen with a think. Things like
> emacs or Acme i can understand, and the latter is even Unix like in the
> way it works.
You can keep the brain implant. I have a desire to not have one.
> Interesting that most old a.k.a. established Unix people give up that
> Unix freedom of everything-is-a-file, that was there for email access via
> nupas -- the way i have seen it in Plan9 (i never ran a research Unix),
> at least -- in favour of a restrictive graphical user interface!
Why do you have to give up one tool to start using a different tool?
I personally use Thunderbird as my primary MUA but weekly use mutt
against the same mailbox w/ data going back 10+ years. I extensively
use Procmail to file messages into the proper folders. I recently wrote
a script that checks (copies of) messages that are being written to
folders to move a message with that Message-ID from the Inbox to the
Trash. (The point being to remove copies that I got via To: or CC: when
I get a copy from the mailing list.)
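A rough Python sketch of that idea (not the actual script; the Maildir
layout, with ~/Maildir as the Inbox and a Trash subfolder, is an
assumption):

import mailbox
import sys

def move_inbox_copy_to_trash(maildir_path: str, message_id: str) -> None:
    # When the list copy of a message has been filed, evict the direct
    # To:/CC: copy with the same Message-ID from the Inbox into Trash.
    inbox = mailbox.Maildir(maildir_path, create=False)
    trash = inbox.get_folder('Trash')          # i.e. ~/Maildir/.Trash
    for key, msg in inbox.iteritems():
        if msg.get('Message-ID') == message_id:
            trash.add(msg)                     # keep a copy in Trash ...
            inbox.remove(key)                  # ... then drop the Inbox copy
            break

if __name__ == '__main__':
    move_inbox_copy_to_trash(sys.argv[1], sys.argv[2])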
It seems like I'm doing things that are far beyond what
Thunderbird can do by leveraging other tools to do things for me. I
also have a handful of devices checking the same mailbox.
> No, we use the same threading algorithm that Zawinski described ([1],
> "the threading algorithm that was used in Netscape Mail and News 2.0
> and 3.0"). I meant, in a threaded display, successive follow-up messages
> which belong to the same thread will not reiterate the Subject:, because
> it is the same one as before, and that is irritating.
>
> [1] http://www.jwz.org/doc/threading.html
I *LOVE* threaded views. I've been using threaded view for longer than
I can remember. I can't fathom not using threaded view.
I believe I mistook your statement to mean that you wanted to thread
based on Subject: header, not the In-Reply-To: or References: header.
>>> And you seem to be using DMARC, which irritates the list-reply mechanism
>>> of at least my MUA.
>>
>> Yes I do use DMARC as well as DKIM and SPF (w/ -all). I don't see how
>> me using that causes problems with "list-reply".
>>
>> My working understanding is that "list-reply" should reply to the list's
>> posting address in the List-Post: header.
>>
>> List-Post: <mailto:tuhs@minnie.tuhs.org>
>>
>> What am I missing or not understanding?
>
> That is not how it works for the MUAs i know. It is an interesting idea.
> And in fact it is used during the "mailing-list address-massage"
> if possible. But one must or should differentiate in between a
> subscribed list and a non-subscribed list, for example. This does
> not work without additional configuration (e.g., we have `mlist' and
> `mlsubscribe' commands to make known mailing-lists to the machine),
> though List-Post: we use for automatic configuration (as via `mlist').
>
> And all in all this is a complicated topic (there are Mail-Followup-To:
> and Reply-To:, for example), and before you say "But what i want is a
> list reply!", yes, of course you are right. But. For my thing i hope
> i have found a sensible way through this, and initially also one that
> does not deter users of console MUA number one (mutt).
How does my use of DMARC irritate the list-reply mechanism of your MUA?
DMARC is completely transparent to message contents. Sure, DKIM adds
headers with a signature. But I don't see anything about DKIM's use
that has any impact on how any MUA handles a message.
Or are you referring to the fact that some mailing lists modify the
From: header to be DMARC compliant?
Please elaborate on what you mean by "DMARC irritate the list-reply
mechanism of your MUA".
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-23 18:49 ` Grant Taylor via TUHS
@ 2018-06-23 21:05 ` Tom Ivar Helbekkmo via TUHS
2018-06-23 21:21 ` Michael Parson
2018-06-23 22:38 ` [TUHS] off-topic list Steffen Nurpmeso
2 siblings, 0 replies; 133+ messages in thread
From: Tom Ivar Helbekkmo via TUHS @ 2018-06-23 21:05 UTC (permalink / raw)
To: Grant Taylor via TUHS; +Cc: Grant Taylor
Grant Taylor via TUHS <tuhs@minnie.tuhs.org> writes:
> How does my use of DMARC irritate the list-reply mechanism of your MUA?
>
> DMARC is completely transparent to message contents. Sure, DKIM adds
> headers with a signature. But I don't see anything about DKIM's use
> that has any impact on how any MUA handles a message.
>
> Or are you referring to the fact that some mailing lists modify the
> From: header to be DMARC compliant?
>
> Please elaborate on what you mean by "DMARC irritate the list-reply
> mechanism of your MUA".
Thanks, Grant! It's always a pleasure when someone decides to follow
these things through, and I'm glad to see you doing it now (even though
I certainly wouldn't have the stamina to do it myself.)
-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science. --Alan Kay
* Re: [TUHS] off-topic list
2018-06-23 18:49 ` Grant Taylor via TUHS
2018-06-23 21:05 ` Tom Ivar Helbekkmo via TUHS
@ 2018-06-23 21:21 ` Michael Parson
2018-06-23 23:31 ` Grant Taylor via TUHS
2018-06-25 2:53 ` Dave Horsfall
2018-06-23 22:38 ` [TUHS] off-topic list Steffen Nurpmeso
2 siblings, 2 replies; 133+ messages in thread
From: Michael Parson @ 2018-06-23 21:21 UTC (permalink / raw)
To: tuhs
On Sat, 23 Jun 2018, Grant Taylor via TUHS wrote:
<snip>
> I recently wrote a script that checks (copies of) messages that are
> being written to folders to move a message with that Message-ID from
> the Inbox to the Trash. (The point being to remove copies that I got
> via To: or CC: when I get a copy from the mailing list.)
The first rule in my .procmailrc does this with formail:
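# formail -D succeeds when the Message-ID is already cached; with the W/h
# flags that counts as delivery, so duplicates stop here and new mail falls through.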
:0 Wh: msgid.lock
| /usr/local/bin/formail -D 8192 .msgid.cache
Doesn't move duplicates to the trash, it just prevents multiple copies
of the same message-id from being delivered at all. The 8192 specfies
how large of a cache (stored in ~/.msgid.cache) of message-ids to keep
track of.
--
Michael Parson
Pflugerville, TX
KF5LGQ
* Re: [TUHS] off-topic list
2018-06-23 21:21 ` Michael Parson
@ 2018-06-23 23:31 ` Grant Taylor via TUHS
2018-06-23 23:36 ` Larry McVoy
2018-06-25 2:53 ` Dave Horsfall
1 sibling, 1 reply; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-23 23:31 UTC (permalink / raw)
To: tuhs
On 06/23/2018 03:21 PM, Michael Parson wrote:
> The first rule in my .procmailrc does this with formail:
That works well for many. But it does not work at all for me.
There are many additional factors that I have to consider:
- I use different email addresses for different things.
- Replies come to one email address.
- Emails from the mailing list come to a different address.
- Replies are direct and arrive sooner.
- Emails from the mailing list are indirect and arrive later.
This means that the emails I get from the mailing list are to different
addresses than messages that were CCed directly to me.
- I want the copy from the mailing list.
- I want to not arbitrarily toss the direct reply in case the message
never arrives from the mailing list.
> Doesn't move duplicates to the trash, it just prevents multiple copies
> of the same message-id from being delivered at all. The 8192 specifies
> how large of a cache (stored in ~/.msgid.cache) of message-ids to keep
> track of.
In light of the above facts, I can't simply reject subsequent copies of
Message-IDs as that does not accomplish the aforementioned desires.
I also want to move the existing message to the Trash folder instead of
blindly removing it so that I have the option of going to Trash and
recovering it via my MUA if I feel the need or desire to do so.
What I do is quite likely extremely atypical. But it does what I want.
I have my own esoteric reasons (some are better than others) for doing
what I do. I'm happy to share if anyone is interested.
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-23 23:31 ` Grant Taylor via TUHS
@ 2018-06-23 23:36 ` Larry McVoy
2018-06-23 23:37 ` Larry McVoy
0 siblings, 1 reply; 133+ messages in thread
From: Larry McVoy @ 2018-06-23 23:36 UTC (permalink / raw)
To: Grant Taylor; +Cc: tuhs
On Sat, Jun 23, 2018 at 05:31:06PM -0600, Grant Taylor via TUHS wrote:
> On 06/23/2018 03:21 PM, Michael Parson wrote:
> >The first rule in my .procmailrc does this with formail:
>
> That works well for many. But it does not work at all for me.
>
> There are many additional factors that I have to consider:
>
> - I use different email addresses for different things.
I think you misunderstood what formail is doing. It's looking
at the Message-ID header. The one for the email you sent
looks like:
Message-ID: <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
All formail is doing is keeping a recent cache of those ids and only letting
one get through.
So suppose someone reply-alls to you and the list (very common). Without
the formail trick you'll see that message twice.
* Re: [TUHS] off-topic list
2018-06-23 23:36 ` Larry McVoy
@ 2018-06-23 23:37 ` Larry McVoy
2018-06-24 0:20 ` Grant Taylor via TUHS
0 siblings, 1 reply; 133+ messages in thread
From: Larry McVoy @ 2018-06-23 23:37 UTC (permalink / raw)
To: Grant Taylor; +Cc: tuhs
Never mind, I didn't read far enough into your email, my bad.
On Sat, Jun 23, 2018 at 04:36:06PM -0700, Larry McVoy wrote:
> On Sat, Jun 23, 2018 at 05:31:06PM -0600, Grant Taylor via TUHS wrote:
> > On 06/23/2018 03:21 PM, Michael Parson wrote:
> > >The first rule in my .procmailrc does this with formail:
> >
> > That works well for many. But it does not work at all for me.
> >
> > There are many additional factors that I have to consider:
> >
> > - I use different email addresses for different things.
>
> I think you misunderstood what formail is doing. It's looking
> at the Message-ID header. The one for the email you sent
> looks like:
>
> Message-ID: <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
>
> All formail is doing is keeping a recent cache of those ids and only letting
> one get through.
>
> So suppose someone reply-alls to you and the list (very common). Without
> the formail trick you'll see that message twice.
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
* Re: [TUHS] off-topic list
2018-06-23 21:21 ` Michael Parson
2018-06-23 23:31 ` Grant Taylor via TUHS
@ 2018-06-25 2:53 ` Dave Horsfall
2018-06-25 5:40 ` Grant Taylor via TUHS
2018-06-25 6:15 ` arnold
1 sibling, 2 replies; 133+ messages in thread
From: Dave Horsfall @ 2018-06-25 2:53 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Sat, 23 Jun 2018, Michael Parson wrote:
> The first rule in my .procmailrc does this with formail:
Anyone with any concept of security will not be running Procmail; it's not
even supported by its author any more, due to its opaque syntax and likely
vulnerabilities (it believes user-supplied headers and runs shell commands
based upon them).
-- Dave VK2KFU
* Re: [TUHS] off-topic list
2018-06-25 2:53 ` Dave Horsfall
@ 2018-06-25 5:40 ` Grant Taylor via TUHS
2018-06-25 6:15 ` arnold
1 sibling, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-25 5:40 UTC (permalink / raw)
To: tuhs
On 06/24/2018 08:53 PM, Dave Horsfall wrote:
> Anyone with any concept of security will not be running Procmail;
I'm going to have to throw a flag and cry foul on that play.
1) "Anyone (with)" is a rather large group.
2) "any concept of security" is a rather large (sub)group.
3) "will not" is rather absolute.
I do believe that I have a better concept of security than many (but not
all) of my colleagues.
- I've got leading (if not bleeding) edge email security.
- I've got full disk encryption on multiple server and workstations.
- I use encrypted email when ever I can.
- I play with 802.1ae MACsec (encrypted Ethernet frames).
- I use salted hashes in proof of concepts.
- I advocate for proper use of sudo...
- ...and go out of my way to educate others on how to use sudo properly.
I could go on, but you probably don't care. In short, I believe I fall
squarely in categories #1 and #2.
Seeing as how I run procmail I invalidate #3.
So, I ask that you retract or amend your statement. Or at least admit
its (partial) inaccuracies.
> it's not even supported by its author any more,
Many of the software packages that TUHS subscribers run on physical and
/ or virtual systems are not supported by their authors any more. Some
of them are directly connected to the Internet too.
How many copies of (Open)VMS are running on (virtual) VAX (emulators)?
Don't like (Open)VMS? Then how about ancient versions of BSD or AT&T SYS
V? How many people are running a wide array of ancient BBSs on as many
platforms?
How many people in corporate offices are running software that went End
of Support 18 months ago?
Lack of support does not make something useless.
> due to its opaque syntax
I'm not aware of Procmail ever having claimed to have simple syntax. I
also believe that Procmail is not alone in this.
m4 is known for being obtuse, as is Sendmail, both of which are still
used too. SQL is notorious for being finicky. I think there's a lot of
C and C++ code that can fall in the same category. (LISP … enough said)
> and likely vulnerabilities
Everything has vulnerabilities. It's about how risky the (known)
vulnerabilities are, and how likely they are to be exploited. It's a
balancing act. Every administrator (or company directing said
administrator) performs a risk assessment and makes a decision.
> (it believes user-supplied headers
Does the latest and greatest SMTP server from Google believe the
information that the user supplies to it? What about the Nginx web
server that seems to be in vogue, does it believe the verb, URL, HTTP
version and Host: header that users supply?
Does Mailman that hosts the TUHS mailing list believe the information
that minnie provides that was originally user supplied?
Does your web browser believe and act on the information that the web
server you are connecting to provided?
Applications are designed to trust the information that is provided to
them. Sure, run some sanity checks on it. But ultimately it's the job
of software to act on the user supplied information.
> and runs shell commands based upon them).
I've seen exceedingly few procmail recipes that actually run shell
commands. Almost all of the procmail recipes that I've seen and use do
a very simple match for fixed text and file the message away in a
folder. These are functions built into procmail and NOT shell commands.
The very few procmail recipes that I've seen that do run shell commands
are passing the message into STDIN of another utility that is itself
designed to accept user supplied data, ideally in a safe way.
So, I believe your statement, "Anyone with any concept of security will
not be running Procmail", is false, literally from word one.
If you want to poke fun at something, take a look at SpamAssassin and
its Perl, both of which are still actively supported. Or how about
all the NPM stuff that people are using in web pages, pulled from
places that God only knows.
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-25 2:53 ` Dave Horsfall
2018-06-25 5:40 ` Grant Taylor via TUHS
@ 2018-06-25 6:15 ` arnold
2018-06-25 7:27 ` Bakul Shah
` (4 more replies)
1 sibling, 5 replies; 133+ messages in thread
From: arnold @ 2018-06-25 6:15 UTC (permalink / raw)
To: tuhs, dave
Dave Horsfall <dave@horsfall.org> wrote:
> On Sat, 23 Jun 2018, Michael Parson wrote:
>
> > The first rule in my .procmailrc does this with formail:
>
> Anyone with any concept of security will not be running Procmail; it's not
> even supported by its author any more, due to its opaque syntax and likely
> vulnerabilities (it believes user-supplied headers and runs shell commands
> based upon them).
>
> -- Dave VK2KFU
So what is the alternative? I've been using it for years with
a pretty static setup to route incoming mail to different places.
I need *something* to do what it does.
Thanks,
Arnold
* Re: [TUHS] off-topic list
2018-06-25 6:15 ` arnold
@ 2018-06-25 7:27 ` Bakul Shah
2018-06-25 12:52 ` Michael Parson
` (3 subsequent siblings)
4 siblings, 0 replies; 133+ messages in thread
From: Bakul Shah @ 2018-06-25 7:27 UTC (permalink / raw)
To: arnold; +Cc: tuhs
On Mon, 25 Jun 2018 00:15:42 -0600 arnold@skeeve.com wrote:
>
> > On Sat, 23 Jun 2018, Michael Parson wrote:
> >
> > > The first rule in my .procmailrc does this with formail:
> >
> > Anyone with any concept of security will not be running Procmail; it's not
> > even supported by its author any more, due to its opaque syntax and likely
> > vulnerabilities (it believes user-supplied headers and runs shell commands
> > based upon them).
> >
> > -- Dave VK2KFU
>
> So what is the alternative? I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.
My crude method has worked better than anything else for me.
[in use for over two decades]
As I read only a subset of messages from mailing lists, if I
directly filed such messages into their own folders, I would
either have to waste more time scanning much larger mail
folders &/or miss paying attention to some messages even
once[1].
Fortunately, in MH one can use named sequences (that map to a
set of picked messages). In essence, I use sequences as "work
space" and other folders as storage space.
For example
$ <run spam filtering script>
$ pick -seq me -to bakul -or -cc bakul -or -bcc bakul
$ pick -seq tuhs -to tuhs@tuhs -or -cc tuhs@tuhs
...
When I have some idle time, I type
$ inc # to incorporate new messages into inbox
$ pickall # my script for creating sequences
Next I scan these sequences in a priority order to see if
anything seems interesting and then process these messages.
Once done, I file them into their own folders and move on to
the next sequence. The whole process takes a few minutes at
most[2] and at the end the inbox is "zeroed"! By zeroing it
each time, I ensure that the next time I will be processing
only new messages, and typically spend less than a second per
message summary line.
[1] This happens to me on Apple Mail.
[2] Unless I decide to reply!
* Re: [TUHS] off-topic list
2018-06-25 6:15 ` arnold
2018-06-25 7:27 ` Bakul Shah
@ 2018-06-25 12:52 ` Michael Parson
2018-06-25 13:41 ` arnold
2018-06-25 13:59 ` Adam Sampson
` (2 subsequent siblings)
4 siblings, 1 reply; 133+ messages in thread
From: Michael Parson @ 2018-06-25 12:52 UTC (permalink / raw)
To: tuhs
On 2018-06-25 01:15, arnold@skeeve.com wrote:
> Dave Horsfall <dave@horsfall.org> wrote:
>
>> On Sat, 23 Jun 2018, Michael Parson wrote:
>>
>> > The first rule in my .procmailrc does this with formail:
>>
>> Anyone with any concept of security will not be running Procmail; it's
>> not
>> even supported by its author any more, due to its opaque syntax and
>> likely
>> vulnerabilities (it believes user-supplied headers and runs shell
>> commands
>> based upon them).
>>
>> -- Dave VK2KFU
>
> So what is the alternative? I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.
Sieve[0] is what I've seen suggested a bit. I've skimmed over the docs
some, but haven't invested the time in figuring out how much effort
would be involved in replacing procmail with it, at first glance, it
does not seem to be a drop-in replacement.
--
Michael Parson
Pflugerville, TX
KF5LGQ
[0] http://sieve.info/
* Re: [TUHS] off-topic list
2018-06-25 12:52 ` Michael Parson
@ 2018-06-25 13:41 ` arnold
2018-06-25 13:56 ` arnold
0 siblings, 1 reply; 133+ messages in thread
From: arnold @ 2018-06-25 13:41 UTC (permalink / raw)
To: tuhs, mparson
Michael Parson <mparson@bl.org> wrote:
> On 2018-06-25 01:15, arnold@skeeve.com wrote:
> > So what is the alternative? I've been using it [procmail] for years with
> > a pretty static setup to route incoming mail to different places.
> > I need *something* to do what it does.
>
> Sieve[0] is what I've seen suggested a bit. I've skimmed over the docs
> some, but haven't invested the time in figuring out how much effort
> would be involved in replacing procmail with it, at first glance, it
> does not seem to be a drop-in replacement.
>
> --
> Michael Parson
> Pflugerville, TX
> KF5LGQ
>
> [0] http://sieve.info/
Thanks for the answer. This looks medium complicated.
It's starting to feel like I could do this myself in my favorite
programming language (gawk) simply by writing a library to parse
the headers and then writing awk code to check things and send emails
to the right place.
If I didn't already have more side projects than I can currently handle ...
Thanks,
Arnold
* Re: [TUHS] off-topic list
2018-06-25 6:15 ` arnold
2018-06-25 7:27 ` Bakul Shah
2018-06-25 12:52 ` Michael Parson
@ 2018-06-25 13:59 ` Adam Sampson
2018-06-25 15:05 ` Grant Taylor via TUHS
2018-06-26 9:05 ` Derek Fawcus
4 siblings, 0 replies; 133+ messages in thread
From: Adam Sampson @ 2018-06-25 13:59 UTC (permalink / raw)
To: tuhs
arnold@skeeve.com writes:
> So what is the alternative? I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.
I use maildrop from the Courier project, which does pretty much the same
job as procmail, with a C-ish filtering language:
http://www.courier-mta.org/maildrop/
I switched over from procmail to maildrop in 2004 and it's worked nicely
for me. There are also a couple of different filtering languages that
Exim understands, if that's your MTA of choice.
--
Adam Sampson <ats@offog.org> <http://offog.org/>
* Re: [TUHS] off-topic list
2018-06-25 6:15 ` arnold
` (2 preceding siblings ...)
2018-06-25 13:59 ` Adam Sampson
@ 2018-06-25 15:05 ` Grant Taylor via TUHS
2018-06-26 9:05 ` Derek Fawcus
4 siblings, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-25 15:05 UTC (permalink / raw)
To: tuhs
On 06/25/2018 12:15 AM, arnold@skeeve.com wrote:
> So what is the alternative? I've been using it for years with a
> pretty static setup to route incoming mail to different places. I need
> *something* to do what it does.
As others have pointed out, Sieve and Maildrop are a couple of options.
I've looked at both of them a few times, somewhat in earnest when I last
built my mail server, but didn't feel that they would be drop in
replacements for my existing LDA needs.
Perhaps my limitations are my own making. I use Maildir and want an LDA
that is compatible with that. At (currently) 453 procmail recipes, I'd
like something that is closer to a drop in replacement. I will rewrite
things if I must, but I'd rather not if it's not required.
I think one of the reasons the LDA space is languishing is that many
mail servers seem to be migrating to mail stores that are more centralized
/ virtualized under one unix account, and they come with their own
purpose built LDA.
The other option that Arnold mentioned, writing your own LDA, seems
somewhat unpleasant to me. Can it be done? Absolutely. I'm afraid that
anything I would produce would likely share similar security problems
that Procmail may very well have. Which leaves me wondering, am I
better off using a tool with a questionable security posture and others
looking at it and trying to abuse it -or- my own tool with a completely
unknown security posture that nobody else is looking at. Thus I choose
the lesser of the evils and continue to use Procmail.
Thanks to Michael and Adam for mentioning Sieve and Maildrop
(respectively) while I was too lazy to comment on my phone in bed.
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-25 6:15 ` arnold
` (3 preceding siblings ...)
2018-06-25 15:05 ` Grant Taylor via TUHS
@ 2018-06-26 9:05 ` Derek Fawcus
2018-06-28 14:25 ` [TUHS] email filtering (was Re: off-topic list) Perry E. Metzger
4 siblings, 1 reply; 133+ messages in thread
From: Derek Fawcus @ 2018-06-26 9:05 UTC (permalink / raw)
To: tuhs
On Mon, Jun 25, 2018 at 12:15:42AM -0600, arnold@skeeve.com wrote:
>
> So what is the alternative?
Well, I've generally found that > 90% of my mail filtering needs
can be handled by the use of extension addresses (see my From: line).
I started doing that with qmail back in the 90s, and these days
tend to use a similar mechanism which is present in postfix.
I've even found that some sendmail installs (as a previous employer used)
support a similar mechanism.
It is only occasionally that I have to resort to procmail (say for
parsing stuff out of Atlassian tools messages), and can then make
its use simpler by having recipes which match a bit, then feed
the message back in to an extension address. At which point delivery
may occur, or some other procmail rules may be run.
DF
* [TUHS] email filtering (was Re: off-topic list)
2018-06-26 9:05 ` Derek Fawcus
@ 2018-06-28 14:25 ` Perry E. Metzger
0 siblings, 0 replies; 133+ messages in thread
From: Perry E. Metzger @ 2018-06-28 14:25 UTC (permalink / raw)
To: tuhs
On Mon, Jun 25, 2018 at 12:15:42AM -0600, arnold@skeeve.com wrote:
>
> So what is the alternative?
I subscribe to a truly ridiculous number of email lists, and I run my
own mail server, so I thought I'd mention my Rube Goldberg style setup
given that it's being discussed. For those that don't like how long
this is, the main keys are "IMAP + sieve for filing all mailing lists
into their own IMAP folder". The rest is pretty boring.
Boring part:
0) My MTA is Postfix, which allows even a relatively small site to get
pretty damn good anti-spam capabilities. It's also really well written
(it's a flock of small tools each of which does only one thing well)
and quite secure, which is important. I have most of the interesting
bells and whistles (like opportunistic TLS) turned on.
1) Email for me goes through procmail to run it through a logging
system, then spam assassin to tag whatever spam got past Postfix, and
finally it's passed to Dovecot's "sieve" system for final filing in
the IMAP server.
2) Sieve breaks up my incoming email into many, many
mailboxes. There's a box for every mailing list I'm on, so I can read
them more or less like newsgroups. I don't put any email for a list
into my main inbox, which gets only mail addressed to me personally so
I can reply to such messages very fast.
Sieve is a nice, standardized language for mailbox filtering, and the
Dovecot implementation works quite well. It's flexible enough that I
can deal with the fact that many mailing lists are hard to pick out
from the mail stream thanks to their operators not following standards.
Spam that made it past Postfix into Spam Assassin might or might not
really be spam, so I have Spam Assassin tag it rather than deleting
it, and Sieve is instructed to file that into its own box just in
case. A cron job expires spam after a couple of weeks so that mailbox
doesn't get too large.
3) As I hinted, my email is stored in an IMAP server, specifically
Dovecot. This allows me to read my mail with a dozen different tools,
including my phone, a couple different MUAs on the desktop, etc. I
dislike (see below) that this limits my choice in tools to process the
mail, but I really need the multi-MUA setup, and I can't afford to
move the mail onto any of my diverse end systems because I need to be
able to read it on all of them.
4) I used MH for many years, and I really wish MH/NMH worked with IMAP
properly, because like others here, I really loved its scriptability,
but the need to use multiple MUAs trumps it at this point. It's also
necessary to communicate with muggles/mundanes/ordinary folk too often
and most of them use HTML email and so I need to be able to read it
even if I don't generate it, which means some use of GUI emailers.
5) As for the future, I'm hoping at some point to have something that
works sort of like MH did but which can speak native IMAP, and I'm
hoping to start using Emacs as an MUA most of the time if I can find a
client that will handle HTML email + IMAP a bit better than most of
them do. (Stop looking at me that way for saying Emacs — I've been
using it as my editor since 1983 and my fingers just don't know any
better any more.)
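The spam expiry job mentioned under 2) is nothing fancy; something in the
spirit of this crontab line does it (the Maildir path and the two-week
window are purely illustrative, and -delete is the GNU/BSD find extension):
    0 4 * * * find "$HOME/Maildir/.Spam/cur" "$HOME/Maildir/.Spam/new" -type f -mtime +14 -delete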
Perry
--
Perry E. Metzger perry@piermont.com
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-23 18:49 ` Grant Taylor via TUHS
2018-06-23 21:05 ` Tom Ivar Helbekkmo via TUHS
2018-06-23 21:21 ` Michael Parson
@ 2018-06-23 22:38 ` Steffen Nurpmeso
2018-06-24 0:18 ` Grant Taylor via TUHS
2 siblings, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-23 22:38 UTC (permalink / raw)
To: Grant Taylor via TUHS
Grant Taylor via TUHS wrote in <ce6f617c-cf8e-63c6-8186-27e09c78020c@spa\
mtrap.tnetconsulting.net>:
|On 06/23/2018 08:49 AM, Steffen Nurpmeso wrote:
|> Oh, I do not know: i have never used a graphical MUA, only pine, then
|> mutt, and now, while anytime before now and then anyway, BSD Mail.
|
|I agree that text mode MUAs from before the turn of the century do have
|a LOT more functionality than most GUI MUAs that came after that point
|in time.
|
|Thankfully we are free to use what ever MUA we want to. :-)
Absolutely true. And hoping that, different to web browsers, no
let me call it pseudo feature race is started that results in less
diversity instead of anything else.
|> they i think misinterpreted RFC 2231 to only allow MIME parameters to be
|> split in ten sections.
|
|I've frequently found things that MUAs (and other applications) don't do
|properly. That's when it becomes a learning game to see what subset of
|the defined standard was implemented incorrectly and deciding if I want
|to work around it or not.
If there is the freedom for a decision. That is how it goes, yes.
For example one may have renamed "The Ethiopian Jewish Exodus --
Narratives of the Migration Journey to Israel 1977 - 1985" to
"Falasha" in order to use the book as an email attachment. Which
would be pretty disappointing. But it is also nice to see that
there are standards which were not fully thought through and
required backward incompatible hacks to fill the gaps. Others
like DNS are pretty perfect and scale fantastic. Thus.
|> Graphical user interfaces are a difficult abstraction, or tedious \
|> to use.
|> I have to use a graphical browser, and it is always terrible to enable
|> Cookies for a site.
|
|Cookies are their own problem as of late. All the "We use
|cookies...<bla>...<bla>...<bla>...." warnings that we now seem to have
|to accept get really annoying. I want a cookie, or more likely a
|header, that says "I accept (first party) cookies." as a signal to not
|pester me.
Ah, it has become a real pain. It is ever so astounding how much
HTML5 specifics, scripting and third party analytics can be
involved for a page that would have looked better twenty years
ago, or say whenever CSS 2.0 came around. Today almost each and
every commercial site i visit is in practice a denial of service
attack. For me and my little box at least, and without gaining
any benefit from that!
The thing is, i do not want to be lied to, but what those boxes you
refer to say are often plain lies. It has nothing to do with my
security, or performance boosts, or whatever ... maximally so in
a juristically unobjectionable way. Very disappointing.
|> For my thing i hope we will at some future day be so resilient that
|> users can let go the nmh mailer without losing any freedom.
|
|Why do you have to let go of one tool? Why can't you use a suite of
|tools that collectively do what you want?
It is just me and my desire to drive this old thing forward so
that it finally will be usable for something real. Or what i deem
for that. I actually have no idea of nmh, but i for one think the
sentence of the old BSD Mail manual stating it is an "intelligent
mail processing system, which has a command syntax reminiscent of
ed(1) with lines replaced by messages" has always been a bit
excessive. And the fork i maintain additionally said it "is also
usable as a mail batch language, both for sending and receiving
mail", adding onto that.
|> I mean, user interfaces are really a pain, and i think this will not
|> go away until we come to that brain implant which i have no doubt will
|> arrive some day, and then things may happen with a think. Things like
|> emacs or Acme i can understand, and the latter is even Unix like in the
|> way it works.
|
|You can keep the brain implant. I have a desire to not have one.
Me too, me too. Oh, me too. Unforgotten those american black and
white films with all those sneaky alien attacks, and i was
a teenager when i saw them. Yeah, but in respect to that i am
afraid the real rabble behind the real implants will be even
worse, and drive General Motors or something. ^.^ I have no
idea.. No, but.. i really do not think future humans will get
around that, just imagine automatic 911 / 112, biologic anomaly
detection, and then you do not need to learn latin vocabulary but
can buy the entire set for 99.95$, and have time for learning
something real important. For example.
It is all right, just be careful and ensure that not all police
men go to the tourist agency to book a holiday in Texas at the
very same time. I think this is manageable.
|> Interesting that most old a.k.a. established Unix people give up that
|> Unix freedom of everything-is-a-file, that was there for email access \
|> via
|> nupas -- the way i have seen it in Plan9 (i never ran a research Unix),
|> at least -- in favour of a restrictive graphical user interface!
|
|Why do you have to give up one tool to start using a different tool?
|
|I personally use Thunderbird as my primary MUA but weekly use mutt
|against the same mailbox w/ data going back 10+ years. I extensively
|use Procmail to file messages into the proper folders. I recently wrote
|a script that checks (copies of) messages that are being written to
|folders to move a message with that Message-ID from the Inbox to the
|Trash. (The point being to remove copies that I got via To: or CC: when
|I get a copy from the mailing list.)
|
|It seems to be like I'm doing things that are far beyond what
|Thunderbird can do by leveraging other tools to do things for me. I
|also have a handful devices checking the same mailbox.
You say it. In the research Unix nupas mail (as i know it from
Plan9) all those things could have been done with a shell script
and standard tools like grep(1) and such. Even Thunderbird would
simply be a maybe even little nice graphical application for
display purposes. The actual understanding of storage format and
email standards would lie solely within the provider of the file
system. Now you use several programs which all ship with all the
knowledge. I for one just want my thing to be easy and reliable
controllable via a shell script. You could replace procmail
(which is i think perl and needs quite some perl modules) with
a monolithic possibly statically linked C program. Then. With
full error checking etc. This is a long road ahead, for my thing.
|> No, we use the same threading algorithm that Zawinski described ([1],
|> "the threading algorithm that was used in Netscape Mail and News 2.0
|> and 3.0"). I meant, in a threaded display, successive follow-up \
|> messages
|> which belong to the same thread will not reiterate the Subject:, because
|> it is the same one as before, and that is irritating.
|>
|> [1] http://www.jwz.org/doc/threading.html
|
|I *LOVE* threaded views. I've been using threaded view for longer than
|I can remember. I can't fathom not using threaded view.
|
|I believe I mistook your statement to mean that you wanted to thread
|based on Subject: header, not the In-Reply-To: or References: header.
It seems you did not have a chance not to. My fault, sorry.
|>>> And you seem to be using DMARC, which irritates the list-reply \
|>>> mechanism
|>>> of at least my MUA.
|>>
|>> Yes I do use DMARC as well as DKIM and SPF (w/ -all). I don't see how
|>> me using that causes problems with "list-reply".
|>>
|>> My working understanding is that "list-reply" should reply to the list's
|>> posting address in the List-Post: header.
|>>
|>> List-Post: <mailto:tuhs@minnie.tuhs.org>
|>>
|>> What am I missing or not understanding?
|>
|> That is not how it works for the MUAs i know. It is an interesting \
|> idea.
|> And in fact it is used during the "mailing-list address-massage"
|> if possible. But one must or should differentiate in between a
|> subscribed list and a non-subscribed list, for example. This does
|> not work without additional configuration (e.g., we have `mlist' and
|> `mlsubscribe' commands to make known mailing-lists to the machine),
|> though List-Post: we use for automatic configuration (as via `mlist').
|>
|> And all in all this is a complicated topic (there are Mail-Followup-To:
|> and Reply-To:, for example), and before you say "But what i want is a
|> list reply!", yes, of course you are right. But. For my thing i hope
|> i have found a sensible way through this, and initially also one that
|> does not deter users of console MUA number one (mutt).
|
|How does my use of DMARC irritate the list-reply mechanism of your MUA?
So ok, it does not, actually. It chooses your "Grant Taylor via
TUHS" which ships with the TUHS address, so one may even see this
as an improvement to DMARC-less list replies, which would go to
TUHS, with or without the "The Unix Heritage Society".
I am maybe irritated by the 'dkim=fail reason="signature
verification failed"' your messages produce. It would not be good
to filter out failing DKIMs, at least on TUHS.
|DMARC is completely transparent to message contents. Sure, DKIM adds
|headers with a signature. But I don't see anything about DKIM's use
|that has any impact on how any MUA handles a message.
|
|Or are you referring to the fact that some mailing lists modify the
|From: header to be DMARC compliant?
Ach, it has been said without much thought. Maybe yes. There is
a Reply-To: of yours.
|Please elaborate on what you mean by "DMARC irritate the list-reply
|mechanism of your MUA".
My evil subconsciousness maybe does not like the DMARC and DKIM
standards. That could be it. I do not know anything better,
maybe the next OpenPGP and S/MIME standards will include
a checksum of some headers in signed-only data, will specify that
some headers have to be repeated in the MIME part and extend the
data to-be-verified to those, whatever. I have not read the
OpenPGP draft nor anything S/MIMEish after RFC 5751.
A nice sunday i wish.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-23 22:38 ` [TUHS] off-topic list Steffen Nurpmeso
@ 2018-06-24 0:18 ` Grant Taylor via TUHS
2018-06-24 10:04 ` Michael Kjörling
` (3 more replies)
0 siblings, 4 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-24 0:18 UTC (permalink / raw)
To: tuhs
On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
> Absolutely true. And hoping that, different to web browsers, no let me
> call it pseudo feature race is started that results in less diversity
> instead of anything else.
I'm not sure I follow. I feel like we do have the choice of MUAs or web
browsers. Sadly some choices are lacking compared to other choices.
IMHO the maturity of some choices and lack thereof in other choices
does not mean that we don't have choices.
> If there is the freedom for a decision. That is how it goes, yes. For
> example one may have renamed "The Ethiopian Jewish Exodus -- Narratives
> of the Migration Journey to Israel 1977 - 1985" to "Falasha" in order to
> use the book as an email attachment. Which would be pretty disappointing.
I don't follow what you're getting at.
> But it is also nice to see that there are standards which were not fully
> thought through and required backward incompatible hacks to fill the gaps.
Why is that a "nice" thing?
> Others like DNS are pretty perfect and scale fantastic. Thus.
Yet I frequently see DNS problems for various reasons. Not the least of
which is that many clients do not gracefully fall back to the secondary
DNS server when the primary is unavailable.
> Ah, it has become a real pain. It is ever so astounding how much HTML5
> specifics, scripting and third party analytics can be involved for a page
> that would have looked better twenty years ago, or say whenever CSS 2.0
> came around.
I'm going to disagree with you there. IMHO the standard is completely
separate from what people do with it.
> Today almost each and every commercial site i visit is in practice
> a denial of service attack. For me and my little box at least, and
> without gaining any benefit from that!
I believe those webmasters have made MANY TERRIBLE choices and have
ended up with the bloat that they now have. - I do not blame the
technology. I blame what the webmasters have done with the technology.
Too many people do not know what they are doing and load "yet another
module" to do something that they can already do with what's already
loaded on their page. But they don't know that. They just glue things
together until they work. Even if that means that they are loading the
same thing multiple times because several of their 3rd party components
each load a common module themselves. This is particularly pernicious with
JavaScript.
> The thing is, i do not want to be lied to, but what those boxes you refer
> to say are often plain lies. It has nothing to do with my security,
> or performance boosts, or whatever ... maximally so in a juristically
> unobjectionable way. Very disappointing.
I don't understand what you're saying.
Who is lying to you? How and where are they lying to you?
What boxes are you saying I'm referring to?
> It is just me and my desire to drive this old thing forward so that it
> finally will be usable for something real. Or what i deem for that.
I don't see any reason that you can't continue to use and improve what
ever tool you want to.
> I actually have no idea of nmh, but i for one think the sentence of
> the old BSD Mail manual stating it is an "intelligent mail processing
> system, which has a command syntax reminiscent of ed(1) with lines
> replaced by messages" has always been a bit excessive. And the fork i
> maintain additionally said it "is also usable as a mail batch language,
> both for sending and receiving mail", adding onto that.
From what little I know, the MH type mail stores and associated
utilities are indeed quite powerful. I think they operate under the
premise that each message is its own file and that you work in
something akin to a shell if not your actual OS shell. I think the MH
commands are quite literally unix commands that can be called from the
unix shell. I think this is in the spirit of simply enhancing the shell
to seem as if it has email abilities via the MH commands. Use any
traditional unix text processing utilities you want to manipulate email.
MH has always been attractive to me, but I've never used it myself.
> You say it. In the research Unix nupas mail (as i know it from Plan9)
> all those things could have been done with a shell script and standard
> tools like grep(1) and such.
I do say that and I do use grep, sed, awk, formail, procmail, cp, mv,
and any number of traditional unix file / text manipulation utilities on
my email weekly. I do this both with the Maildir (which is quite
similar to MH) on my mail server and with the Thunderbird message store
that is itself a variant of Maildir with a file per message in a
standard directory structure.
> Even Thunderbird would simply be a maybe even little nice graphical
> application for display purposes.
The way that I use Thunderbird, that's exactly what it is. A friendly
and convenient GUI front end to access my email.
> The actual understanding of storage format and email standards would
> lie solely within the provider of the file system.
The emails are stored in discrete files that themselves are effectively
mbox files with one message therein.
> Now you use several programs which all ship with all the knowledge.
I suppose if you count greping for a line in a text file as knowledge of
the format, okay.
egrep "^Subject: " message.txt
There's nothing special about that. It's a text file with a line that
looks like this:
Subject: Re: [TUHS] off-topic list
> I for one just want my thing to be easy and reliable controllable via
> a shell script.
That's a laudable goal. I think MH is very conducive to doing that.
> You could replace procmail (which is i think perl and needs quite some
> perl modules) with a monolithic possibly statically linked C program.
I'm about 95% certain that procmail is its own monolithic C program.
I've never heard of any reference to Perl in association with procmail.
Are you perhaps thinking of a different local delivery agent?
> Then. With full error checking etc. This is a long road ahead, for
> my thing.
Good luck to you.
> So ok, it does not, actually. It chooses your "Grant Taylor via TUHS"
> which ships with the TUHS address, so one may even see this as an
> improvement to DMARC-less list replies, which would go to TUHS, with or
> without the "The Unix Heritage Society".
Please understand, that's not how I send the emails. I send them with
my name and my email address. The TUHS mailing list modifies them.
Aside: I support the modification that it is making.
> I am maybe irritated by the 'dkim=fail reason="signature verification
> failed"' your messages produce. It would not be good to filter out
> failing DKIMs, at least on TUHS.
Okay. That is /an/ issue. But I believe it's not /my/ issue to solve.
My server DKIM signs messages that it sends out. From everything that
I've seen and tested (and I actively look for problems) the DKIM
signatures are valid and perfectly fine.
That being said, the TUHS mailing list modifies messages in the following
ways:
1) Modifies the From: when the sending domain uses DMARC.
2) Modifies the Subject to prepend "[TUHS] ".
3) Modifies the body to append a footer.
All three of these actions modify the data that receiving DKIM filters
calculate hashes based on. Since the data changed, obviously the hash
will be different.
I do not fault TUHS for this.
But I do wish that TUHS stripped DKIM and associated headers of messages
going into the mailing list. By doing that, there would be no data to
compare to that wouldn't match.
I think it would be even better if TUHS would DKIM sign messages as they
leave the mailing list's mail server.
> Ach, it has been said without much thought. Maybe yes. There is a
> Reply-To: of yours.
I do see the Reply-To: header that you're referring to. I don't know
where it's coming from. I am not sending it. I just confirmed that
it's not in my MUA's config, nor is it in my sent Items.
> My evil subconsciousness maybe does not like the DMARC and DKIM standards.
> That could be it. I do not know anything better, maybe the next
> OpenPGP and S/MIME standards will include a checksum of some headers
> in signed-only data, will specify that some headers have to be repeated
> in the MIME part and extend the data to-be-verified to those, whatever.
> I have not read the OpenPGP draft nor anything S/MIMEish after RFC 5751.
I don't know if PGP or S/MIME will ever mandate anything about headers
which are structurally outside of their domain.
I would like to see an option in MUAs that support encrypted email for
something like the following:
Subject: (Subject in encrypted body.)
Where the encrypted body included a header like the following:
Encrypted-Subject: Re: [TUHS] off-topic list
I think that MUAs could then display the subject that was decrypted out
of the encrypted body.
> A nice sunday i wish.
You too.
--
Grant. . . .
unix || die
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-24 0:18 ` Grant Taylor via TUHS
@ 2018-06-24 10:04 ` Michael Kjörling
2018-06-25 16:10 ` Steffen Nurpmeso
2018-06-25 0:43 ` [TUHS] mail (Re: " Bakul Shah
` (2 subsequent siblings)
3 siblings, 1 reply; 133+ messages in thread
From: Michael Kjörling @ 2018-06-24 10:04 UTC (permalink / raw)
To: tuhs
On 23 Jun 2018 18:18 -0600, from tuhs@minnie.tuhs.org (Grant Taylor via TUHS):
>> Now you use several programs which all ship with all the knowledge.
>
> I suppose if you count greping for a line in a text file as
> knowledge of the format, okay.
>
> egrep "^Subject: " message.txt
>
> There's nothing special about that. It's a text file with a line
> that looks like this:
>
> Subject: Re: [TUHS] off-topic list
The problem, of course (and I hope this is keeping this Unixy enough),
with that approach is that it won't handle headers split across
multiple lines (I'm looking at you, Received:, but you aren't alone),
and that it'll match lines in the body of the message as well (such as
the "Subject: " line in the body of your message), unless the body
happens to be e.g. Base64 encoded which instead complicates searching
for non-header material.
For sure neither is insurmountable even with standard tools, but it
does require a bit more complexity than a simple egrep to properly
parse even a single message, let alone a combination of multiple ones
(as seen in mbox mailboxes, for example). At that point having
specific tools, such as formail, that understand the specific file
format does start to make sense...
There isn't really much conceptual difference between writing, say,
formail -X Subject: < message.txt
and
egrep "^Subject: " message.txt
but the way the former handles certain edge cases is definitely better
than that of the latter.
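To make that concrete: a rough awk equivalent of the formail call, just
for the Subject: case and ignoring niceties such as case-insensitive
header names, might look like this (an untested sketch, not a drop-in
replacement):
    awk '
        /^$/     { exit }                     # blank line ends the header block
        /^[ \t]/ { if (keep) print; next }    # folded continuation line
                 { keep = ($0 ~ /^Subject:/); if (keep) print }
    ' message.txt
Already noticeably more fiddly than the one-liner, which is the point.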
Make everything as simple as possible, but not simpler. (That goes for
web pages, too, by the way.)
> But I do wish that TUHS stripped DKIM and associated headers of
> messages going into the mailing list. By doing that, there would be
> no data to compare to that wouldn't match.
>
> I think it would be even better if TUHS would DKIM sign messages as
> they leave the mailing list's mail server.
I believe the correct way would indeed be to validate, strip and
possibly re-sign. That way, everyone would (should) be making correct
claims about a message's origin.
FWIW, SPF presents a similar problem with message forwarding without
address rewriting... so it's definitely not just DKIM.
--
Michael Kjörling • https://michael.kjorling.se • michael@kjorling.se
“The most dangerous thought that you can have as a creative person
is to think you know what you’re doing.” (Bret Victor)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-24 10:04 ` Michael Kjörling
@ 2018-06-25 16:10 ` Steffen Nurpmeso
2018-06-25 18:48 ` Grant Taylor via TUHS
0 siblings, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-25 16:10 UTC (permalink / raw)
To: tuhs
Michael Kjörling wrote in <20180624100438.GY10129@h-174-65.A328.priv.ba\
hnhof.se>:
|On 23 Jun 2018 18:18 -0600, from tuhs@minnie.tuhs.org (Grant Taylor \
|via TUHS):
|>> Now you use several programs which all ship with all the knowledge.
|>
|> I suppose if you count greping for a line in a text file as
|> knowledge of the format, okay.
|>
|> egrep "^Subject: " message.txt
|>
|> There's nothing special about that. It's a text file with a line
|> that looks like this:
|>
|> Subject: Re: [TUHS] off-topic list
|
|The problem, of course (and I hope this is keeping this Unixy enough),
|with that approach is that it won't handle headers split across
|multiple lines (I'm looking at you, Received:, but you aren't alone),
|and that it'll match lines in the body of the message as well (such as
|the "Subject: " line in the body of your message), unless the body
|happens to be e.g. Base64 encoded which instead complicates searching
|for non-header material.
|
|For sure neither is insurmountable even with standard tools, but it
|does require a bit more complexity than a simple egrep to properly
|parse even a single message, let alone a combination of multiple ones
|(as seen in mbox mailboxes, for example). At that point having
|specific tools, such as formail, that understand the specific file
|format does start to make sense...
|
|There isn't really much conceptual difference between writing, say,
| formail -X Subject: < message.txt
|and
| egrep "^Subject: " message.txt
|but the way the former handles certain edge cases is definitely better
|than that of the latter.
|
|Make everything as simple as possible, but not simpler. (That goes for
|web pages, too, by the way.)
|> But I do wish that TUHS stripped DKIM and associated headers of
|> messages going into the mailing list. By doing that, there would be
|> no data to compare to that wouldn't match.
|>
|> I think it would be even better if TUHS would DKIM sign messages as
|> they leave the mailing list's mail server.
|
|I believe the correct way would indeed be to validate, strip and
|possibly re-sign. That way, everyone would (should) be making correct
|claims about a message's origin.
DKIM reuses the *SSL key infrastructure, which is good. (Many eyes
see the code in question.) It places records in DNS, which is
also good, now that we have DNS over TCP/TLS and (likely) DTLS.
In practice however things may differ and to me DNS security is
all in all not given as long as we get to the transport layer
security.
I personally do not like DKIM still, i have opendkim around and
thought about it, but i do not use it, i would rather wish that
public TLS certificates could also be used in the same way as
public S/MIME certificates or OpenPGP public keys work, then only
by going to a TLS endpoint securely once, there could be
end-to-end security. I am not a cryptographer, however. (I also
have not read the TLS v1.3 standard which is about to become
reality.) The thing however is that for DKIM a lonesome user
cannot do anything -- you either need to have your own SMTP
server, or you need to trust your provider. But our own user
interface is completely detached. (I mean, at least if no MTA is
used one could do the DKIM stuff, too.)
|FWIW, SPF presents a similar problem with message forwarding without
|address rewriting... so it's definitely not just DKIM.
--End of <20180624100438.GY10129@h-174-65.A328.priv.bahnhof.se>
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-25 16:10 ` Steffen Nurpmeso
@ 2018-06-25 18:48 ` Grant Taylor via TUHS
0 siblings, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-25 18:48 UTC (permalink / raw)
To: tuhs
[-- Attachment #1: Type: text/plain, Size: 2820 bytes --]
On 06/25/2018 10:10 AM, Steffen Nurpmeso wrote:
> DKIM reuses the *SSL key infrastructure, which is good.
Are you saying that DKIM relies on the traditional PKI via CA
infrastructure? Or are you saying that it uses similar technology that
is completely independent of the PKI / CA infrastructure?
> (Many eyes see the code in question.) It places records in DNS, which
> is also good, now that we have DNS over TCP/TLS and (likely) DTLS.
> In practice however things may differ and to me DNS security is all in
> all not given as long as we get to the transport layer security.
I believe that a secure DNS /transport/ is not sufficient. Simply
securing the communications channel does not mean that the entity on the
other end is not lying. Particularly when not talking to the
authoritative server, likely by relying on caching recursive resolvers.
> I personally do not like DKIM still, i have opendkim around and
> thought about it, but i do not use it, i would rather wish that public
> TLS certificates could also be used in the same way as public S/MIME
> certificates or OpenPGP public keys work, then only by going to a TLS
> endpoint securely once, there could be end-to-end security.
All S/MIME interactions that I've seen do use certificates from a well
known CA via the PKI.
(My understanding of) what you're describing is encryption of data in
flight. That does nothing to encrypt / protect data at rest.
S/MIME /does/ provide encryption / authentication of data in flight
/and/ data at rest.
S/MIME and PGP can also be used for things that never cross the wire.
> I am not a cryptographer, however. (I also have not read the TLS v1.3
> standard which is about to become reality.) The thing however is that
> for DKIM a lonesome user cannot do anything -- you either need to have
> your own SMTP server, or you need to trust your provider.
I don't think that's completely accurate. DKIM is a method of signing
(via cryptographic hash) headers as you see (send) them. I see no
reason why a client can't add DKIM headers / signature to messages it
sends to the MSA.
Granted, I've never seen this done. But I don't see anything preventing
it from being the case.
> But our own user interface is completely detached. (I mean, at least
> if no MTA is used one could do the DKIM stuff, too.)
I know that it is possible to do things on the receiving side. I've got
the DKIM Verifier add-on installed in Thunderbird, which gives me client
side UI indication if the message that's being displayed still passes
DKIM validation or not. The plugin actually calculates the DKIM hash
and compares it locally. It's not just relying on a header added by
receiving servers.
--
Grant. . . .
unix || die
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 3982 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* [TUHS] mail (Re: off-topic list
2018-06-24 0:18 ` Grant Taylor via TUHS
2018-06-24 10:04 ` Michael Kjörling
@ 2018-06-25 0:43 ` Bakul Shah
2018-06-25 1:15 ` Lyndon Nerenberg
2018-06-25 14:18 ` [TUHS] " Clem Cole
2018-06-25 15:51 ` [TUHS] off-topic list Steffen Nurpmeso
3 siblings, 1 reply; 133+ messages in thread
From: Bakul Shah @ 2018-06-25 0:43 UTC (permalink / raw)
To: TUHS main list
On Jun 23, 2018, at 5:18 PM, Grant Taylor via TUHS <tuhs@minnie.tuhs.org> wrote:
>
>> I actually have no idea of nmh, but i for one think the sentence of the old BSD Mail manual stating it is an "intelligent mail processing system, which has a command syntax reminiscent of ed(1) with lines replaced by messages" has always been a bit excessive. And the fork i maintain additionally said it "is also usable as a mail batch language, both for sending and receiving mail", adding onto that.
>
> What little I know about the MH type mail stores and associated utilities are indeed quite powerful. I think they operate under the premise that each message is it's own file and that you work in something akin to a shell if not your actual OS shell. I think the MH commands are quite literally unix command that can be called from the unix shell. I think this is in the spirit of simply enhancing the shell to seem as if it has email abilities via the MH commands. Use any traditional unix text processing utilities you want to manipulate email.
>
> MH has always been attractive to me, but I've never used it myself.
One of the reasons I continue using MH (now nmh) as my primary
email system is because it works so well with Unixy tools. I
can e.g. do
pick +TUHS -after '31 Dec 2017' -and -from bakul | xargs scan
to see summary lines of messages I posted to this group this
year. Using MH tools with non-MH tools is a bit clunky though
useful; I just have to cd to the right Mail directory first.
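Mixing the two looks roughly like this (a sketch; quoting and corner
cases glossed over):
    cd "$(mhpath +TUHS)"     # the folder is just a directory of message files
    grep -l -i colossus $(pick +TUHS -after '31 Dec 2017') | xargs scan +TUHS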
A mail message is really a structured object and if we
represent it as a directory, all the standard unix tools can
be used. This is what Presotto's upasfs on plan9 does. With it
you can do things like
cd /mail/fs/mbox
grep -l foo */from | sed 's/from/to/' | xargs grep -l bar
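where each message shows up roughly as a directory like this (from
memory, so take the exact file names with a grain of salt):
    /mail/fs/mbox/1/
        from  to  subject  date  body  raw    # header fields and decoded body, as files
        1/  2/                                 # MIME sub-parts, again directories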
But you soon realize standard unix tools are quite a bit
clunky in dealing with structured objects - often you want to
deal with a set of sub components simultaneously (e.g. the
pick command above). This problem is actually pretty common in
the unix world and you have to have context/target specific
tools. At the same time a unix sensibility can guide the
architecture / design. Something like MH toolset would be
easier to implement if you assume an underlying mbox
filesystem. Conversely, the same sort of tools can perhaps be
useful elsewhere.
\vague-idea{
Seems to me that we have taken the unix (& plan9) model for
granted and not really explored it at this level much. That
is, can we map structured objects into trees/graphs in an
efficient way such that standard tools can be used on them?
Can we extend standard tools to explore substructures in a
more generic way?
}
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 0:43 ` [TUHS] mail (Re: " Bakul Shah
@ 2018-06-25 1:15 ` Lyndon Nerenberg
2018-06-25 2:44 ` George Michaelson
` (2 more replies)
0 siblings, 3 replies; 133+ messages in thread
From: Lyndon Nerenberg @ 2018-06-25 1:15 UTC (permalink / raw)
To: Bakul Shah; +Cc: TUHS main list
> On Jun 24, 2018, at 5:43 PM, Bakul Shah <bakul@bitblocks.com> wrote:
>
> \vague-idea{
> Seems to me that we have taken the unix (& plan9) model for
> granted and not really explored it at this level much. That
> is, can we map structured objects into trees/graphs in an
> efficient way such that standard tools can be used on them?
> Can we extend standard tools to explore substructures in a
> more generic way?
> }
One of the main reasons I bailed out of nmh development was the disinterest in trying new and offside ways of addressing the mail store model. I have been an MH user since the mid-80s, and I still think it is more functional than any other MUA in use today (including Alpine). Header voyeurs will note I'm sending this from Mail.app. As was mentioned earlier in this conversation, there is no one MUA, just as there is no one text editing tool. I used a half dozen MUAs on a daily basis, depending on my needs.
But email, as a structured subset of text, deserves its own set of dedicated tools. formail and procmail are immediate examples. As MIME intruded on the traditional ASCII-only UNIX mbox format, grep and friends became less helpful. HTML just made it worse.
Plan9's upas/fs abstraction of the underlying mailbox helps a lot. By undoing all the MIME encoding, and translating the character sets to UTF8, grep and friends work again. That filesystem abstraction has so much more potential, too. E.g. pushing a pgpfs layer in there to transparently decode PGP content.
I really wish there was a way to bind Plan9-style file servers into the UNIX filename space. Matt Blaze's CFS is the closest example of this I have seen. Yet even it needs superuser privs. (Yes, there's FUSE, but it just re-invents the NFS interface with no added benefit.)
Meanwhile, I have been hacking on a version of nmh that uses a native UTF8 message store. I.e. it undoes all the MIME encoding, and undoes non-UTF8 charsets. So grep and friends work, once again.
--lyndon
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 1:15 ` Lyndon Nerenberg
@ 2018-06-25 2:44 ` George Michaelson
2018-06-25 3:04 ` Larry McVoy
2018-06-25 3:15 ` Bakul Shah
2018-06-25 16:26 ` Steffen Nurpmeso
2 siblings, 1 reply; 133+ messages in thread
From: George Michaelson @ 2018-06-25 2:44 UTC (permalink / raw)
To: Lyndon Nerenberg; +Cc: TUHS main list
your email Lyndon, was uncannily like what I wanted to say. I think.
a) (n)MH is great
b) I can't live in one mail UI right now for a number of reasons.
Thats unfortunate
3) integration of MH into pop/imap was abortive and requires effort.
If thats improved, I'd love to know
4) we stopped advancing the art (tm) for handling data via pipes and
grep-like workflows
the plan9 plumber is amazing. I remain in awe of the work done to get
there, but its observable it isn't well applied anywhere else in the
ecology, that I can see. Which is odd, and .a shame given Pike works
in Google now. I guess people move into different roles (I know I
have)
If you put your version of NMH up anywhere I'd love to see it.
the one thing I do know, is that I have far more email accounts than I
should do (three, with two subvariants) and far more UI than I want
(four)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 2:44 ` George Michaelson
@ 2018-06-25 3:04 ` Larry McVoy
0 siblings, 0 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-25 3:04 UTC (permalink / raw)
To: George Michaelson; +Cc: TUHS main list
On Mon, Jun 25, 2018 at 12:44:15PM +1000, George Michaelson wrote:
> your email Lyndon, was uncannily like what I wanted to say. I think.
>
> a) (n)MH is great
> b) I can't live in one mail UI right now for a number of reasons.
> Thats unfortunate
> 3) integration of MH into pop/imap was abortive and requires effort.
> If thats improved, I'd love to know
I use rackspace for mcvoy.com but plain old mutt for reading it here
and sending. I use fetchmail to get the mail locally. Works for me
because mcvoy.com used to be a mail server and is still set up from
those times to send mail. Kind of hacky but rackspace does the spam
filtering and I got sick of babysitting that.
> 4) we stopped advancing the art (tm) for handling data via pipes and
> grep-like workflows
So the source management system I started, BitKeeper, has sort of an
answer for the processing question. It's a stretch but if you look
at it enough a mail message is a little like a commit, they both have
fields.
Below is an example of the little "language" we built for processing
deltas in a revision history graph. Some notes on how it works:
:SOMETHING: means take struct delta->SOMETHING and replace the
:SOMETHING: with that value.
Control statements begin with $, so $if (expr).
From awk we get $begin and $end (this whole language is very awk
like; where awk would consider a line, we hand the language
a struct delta.)
We invented a $each(:MULTI_LINE_FILE:) {
:MULTI_LINE_FILE:
}
that is an interator, the variable in the body evaluates to
the next line of the multi line thing. Weird but it works.
$json(:FIELD:) json encodes the field.
"text" is just printed.
We gave you 10 registers/variables in $0 .. $9, they default to
false.
This little script is running through each commit and printing out the
information in json. Examples after the script.
It's important to understand that the $begin/end are run before/after
the deltas and then the main part of the script is run once per delta,
just like awk.
# dspec-v2
# The dspec used by 'bk changes -json'
$begin {
"[\n"
}
$if (:CHANGESET: && !:COMPONENT_V:) {
$if($0 -eq 1) {
"\},\n"
}
"\{\n"
" \"key\": \":MD5KEY:\",\n"
" \"user\": \":USER:\",\n"
" \"host\": \":HOST:\",\n"
" \"date\": \":Dy:-:Dm:-:Dd:T:T::TZ:\",\n"
" \"serial\": :DS:,\n"
" \"comments\": \"" $each(:C:){$json{(:C:)}\\n} "\",\n"
$if (:TAGS:) {
" \"tags\": [ "
$each(:TAGS:){:JOIN:"\""(:TAGS:)"\""}
" ],\n"
}
" \"parents\": [ "
$if(:PARENT:){"\"" :MD5KEY|PARENT: "\""}
$if(:PARENT: && :MPARENT:){," "}
$if(:MPARENT:){"\"" :MD5KEY|MPARENT: "\""}
" ]\n"
${0=1} # we need to close off the changeset
}
$end {
$if($0 -eq 1) {
"\}\n"
}
"]\n"
}
So here is human readable output:
$ bk changes -1
ChangeSet@1.2926, 2018-03-12 08:00:33-04:00, rob@bugs.(none)
L: Fix bug where "defaultx:" would be scanned as a T_DEFAULT
followed by a T_COLON. The "x" and anything else after
"default" and before the colon would be ignored.
So if you ever had an option name that began with "default",
it wouldn't be scanned correctly.
This bug was reported by user GNX on the BitKeeper user's forum
(Little language area).
And here is the same thing run through that script above.
$ bk changes -1 --json
[
{
"key": "5aa66be1MaS_1t5lQkNCflPexCwd2w",
"user": "rob",
"host": "bugs.(none)",
"date": "2018-03-12T08:00:33-04:00",
"serial": 11178,
"comments": "L: Fix bug where \"defaultx:\" would be scanned as a T_DEFAULT\nf
ollowed by a T_COLON. The \"x\" and anything else after\n\"default\" and before
the colon would be ignored.\n\nSo if you ever had an option name that began with
\"default\",\nit wouldn't be scanned correctly.\n\nThis bug was reported by use
r GNX on the BitKeeper user's forum\n(Little language area).\n",
"parents": [ "5a2d8748bf8TYIOquTa3CZInTjC7KQ" ]
}
]
If you have read this far, maybe some ideas from this stuff could be used
to get a sort of processing system for structured data. You'd need a plugin
for each structure but the harness itself could be reused.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 1:15 ` Lyndon Nerenberg
2018-06-25 2:44 ` George Michaelson
@ 2018-06-25 3:15 ` Bakul Shah
2018-06-25 16:26 ` Steffen Nurpmeso
2 siblings, 0 replies; 133+ messages in thread
From: Bakul Shah @ 2018-06-25 3:15 UTC (permalink / raw)
To: Lyndon Nerenberg; +Cc: TUHS main list
On Jun 24, 2018, at 6:15 PM, Lyndon Nerenberg <lyndon@orthanc.ca> wrote:
>
> But email, as a structured subset of text, deserves its own set of dedicated tools. formail and procmail are immediate examples. As MIME intruded on the traditional ASCII-only UNIX mbox format, grep and friends became less helpful. HTML just made it worse.
Storing messages in mbox format files is like keeping
tarfiles around. You wouldn't (usually) run grep on a
tarfile - you'd unpack it and then select files based
on extensions etc before running grep. In the same way
running grep on what should be a transport only mail
format doesn't make much sense.
> Meanwhile, I have been hacking on a version of nmh that uses a native UTF8 message store. I.e. it undoes all the MIME encoding, and undoes non-UTF8 charsets. So grep and friends work, once again.
Note that Mail.app already does this (more or less).
I have a paper design for (n)mh<->imap connection &
associated command changes. I didn't want to touch nmh
code so I have been doing some prototyping in Go but
any project progress is in fits and starts....
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 1:15 ` Lyndon Nerenberg
2018-06-25 2:44 ` George Michaelson
2018-06-25 3:15 ` Bakul Shah
@ 2018-06-25 16:26 ` Steffen Nurpmeso
2018-06-25 18:59 ` Grant Taylor via TUHS
2 siblings, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-25 16:26 UTC (permalink / raw)
To: TUHS main list
Lyndon Nerenberg wrote in <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>:
|> On Jun 24, 2018, at 5:43 PM, Bakul Shah <bakul@bitblocks.com> wrote:
|>
|> \vague-idea{
|> Seems to me that we have taken the unix (& plan9) model for
|> granted and not really explored it at this level much. That
|> is, can we map structured objects into trees/graphs in an
|> efficient way such that standard tools can be used on them?
|> Can we extend standard tools to explore substructures in a
|> more generic way?
|>}
|
|One of the main reasons I bailed out of nmh development was the disinter\
|est in trying new and offside ways of addressing the mail store model. \
| I have been an MH user since the mid-80s, and I still think it is \
|more functional than any other MUA in use today (including Alpine). \
| Header voyeurs will note I'm sending this from Mail.app. As was mentio\
|ned earlier in this conversation, there is no one MUA, just as there \
|is no one text editing tool. I used a half dozen MUAs on a daily basis, \
|depending on my needs.
|
|But email, as a structured subset of text, deserves its own set of \
|dedicated tools. formail and procmail are immediate examples. As \
|MIME intruded on the traditional ASCII-only UNIX mbox format, grep \
|and friends became less helpful. HTML just made it worse.
|
|Plan9's upas/fs abstraction of the underlying mailbox helps a lot. \
| By undoing all the MIME encoding, and translating the character sets \
|to UTF8, grep and friends work again. That filesystem abstraction \
|has so much more potential, too. E.g. pushing a pgpfs layer in there \
|to transparently decode PGP content.
|
|I really wish there was a way to bind Plan9-style file servers into \
|the UNIX filename space. Matt Blaze's CFS is the closest example of \
|this I have seen. Yet even it needs superuser privs. (Yes, there's \
|FUSE, but it just re-invents the NFS interface with no added benefit.)
|
|Meanwhile, I have been hacking on a version of nmh that uses a native \
|UTF8 message store. I.e. it undoes all the MIME encoding, and undoes \
|non-UTF8 charsets. So grep and friends work, once again.
The SMTPUTF8 standard can do this, and nmh now can too. There was
a conference presentation of i think a Microsoft employee that was
linked on one of the lists i track. He also said this, "finally
being able to just look at the mail as it is (again)". He showed
one in an editor too. But the presentation could be seen as the
whale of a lie, as propaganda, there was no traceroute, no
signatures, no attachments. Cool and so, but not very realistic.
I do not want to look at the plain source of a mail message in
almost all cases. It is otherwise mostly for development.
Therefore i am having trouble understanding all this, i mean,
i need a PDF viewer to look into a PDF, [..etc...], just the same
is true for email. Maybe because i like MBOX a lot, it is just
a database, a single file, that i can easily take and move.
I have never understood why people get excited about Maildir, you
have trees full of files with names which hurt the eyes, and they
miss a From_ line (and are thus not valid MBOX files, i did not
miss that in the other mail).
Despite that, having a possibility to look easily i miss much.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] mail (Re: off-topic list
2018-06-25 16:26 ` Steffen Nurpmeso
@ 2018-06-25 18:59 ` Grant Taylor via TUHS
0 siblings, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-25 18:59 UTC (permalink / raw)
To: tuhs
[-- Attachment #1: Type: text/plain, Size: 1950 bytes --]
On 06/25/2018 10:26 AM, Steffen Nurpmeso wrote:
> I have never understood why people get excited about Maildir, you have
> trees full of files with names which hurt the eyes,
First, a single Maildir is a parent directory and three sub-directories (sketched after these points).
Many Maildir based message stores are collections of multiple
individual Maildirs.
Second, the names may look ugly, but they are named independently of the
contents of the message.
Third, I prefer the file per message as opposed to monolithic mbox for
performance reasons. Thus message corruption impacts a single message
and not the entire mail folder (mbox).
Aside: I already have a good fast database that most people call a file
system and it does a good job tracking metadata for me.
Fourth, I found maildir to be faster on older / slower servers because
it doesn't require copying (backing up) the monolithic mbox file prior
to performing an operation on it. It splits reads & writes into much
smaller chunks that are more friendly (and faster) to the I/O sub-system.
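For reference, the layout in question is just the standard Maildir
triplet:
    Maildir/
        tmp/    # messages in the middle of being delivered
        new/    # delivered messages the MUA has not yet seen
        cur/    # messages the MUA has seen / filed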
Could many of the same things be said about MH? Very likely. What I
couldn't say about MH at the time I went looking and comparing (typical)
unix mail stores was the readily available POP3 and IMAP interface.
Seeing as how POP3 and IMAP were a hard requirement for me, MH was a
non-starter.
> they miss a From_ line (and are thus not valid MBOX files, i did not
> miss that in the other mail).
True. Though I've not found that to be a limitation. I'm fairly
confident that the Return-Path: header that is added / replaced by my
MTA does functionally the same thing.
I'm sure similar differences can be had between many different solutions
in Unix's history. SYS V style init.d scripts vs BSD style /etc/rc.d
style init scripts vs systemd or Solaris's SMD (I think that's its name).
To each his / her own.
--
Grant. . . .
unix || die
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 3982 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-24 0:18 ` Grant Taylor via TUHS
2018-06-24 10:04 ` Michael Kjörling
2018-06-25 0:43 ` [TUHS] mail (Re: " Bakul Shah
@ 2018-06-25 14:18 ` Clem Cole
2018-06-25 15:28 ` [TUHS] off-topic list [ really mh ] Jon Steinhart
2018-06-25 15:51 ` [TUHS] off-topic list Steffen Nurpmeso
3 siblings, 1 reply; 133+ messages in thread
From: Clem Cole @ 2018-06-25 14:18 UTC (permalink / raw)
To: Grant Taylor; +Cc: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 2742 bytes --]
On Sat, Jun 23, 2018 at 8:18 PM, Grant Taylor via TUHS <tuhs@minnie.tuhs.org
> wrote:
>
>
> What little I know about the MH type mail stores and associated utilities
> are indeed quite powerful.
Yep, their power and flaw all rolled together actually. Until I had
Pachyderm on Tru64/Alpha with AltaVista under the covers (which was gmail's
predecessor), I ran a flavor of MH from the time Bruce (Borden - MH's
author) first released it on the 6th edition on the Rand USENIX tape.
I'm going to guess for about 25 years. Although for the last 8-10 years,
I ran a post processor user interface called 'HM' (also from Rand) that was
curses based and split the screen into two.
> I think they operate under the premise that each message is it's own
> file
Correct - which is great, other than on small systems it chews up inodes
and disk space which for v6 and v7 could be a problem. But it means
everything was always ASCII and easy to grok and any tool from an editor to
macro processor could be inserted. It also meant that unlike AT&T "mail",
the division between the MUA and the MTA was first declared by the Rand and
understood in Unix and used in the original UofI ArpaNet code (before
Kurt's delivermail [sendmail's predecessor] which was part of UCB Mail, or
the MIT mailer ArpaNet hacks that would come later).
BTW: I may have the original Rand MH release somewhere. We ran it at
Tektronix on V6 on the 11/60 and then V7 on the TekLabs 11/70, as I brought
it with me. We hacked the MTA portion to talk smtpd under Bruce's UNET
code to our VMS/SMTPD at some point.
> and that you work in something akin to a shell if not your actual OS shell.
Exactly. Your shell or emacs if you so desired - whatever your native
system interface was. HM took the idea a little further to make things
more screen oriented and later versions of MH picked up some of the HM stuff
I'm told; but I had started to use Pachyderm - which was search based.
> I think the MH commands are quite literally unix command that can be
> called from the unix shell. I think this is in the spirit of simply
> enhancing the shell to seem as if it has email abilities via the MH
> commands. Use any traditional unix text processing utilities you want to
> manipulate email.
Absolutely. I do find myself pulling things out of gmail sometimes so I
can do Unix tricks to inbound mail that gmail will not let me do. And
when I want to do anything really automated on the send side, I have MH
installed and it calls the local MTA. But I admit, the indexing that
search gives you is incredibly powerful for day to day use and I could not
go back to MH.
[-- Attachment #2: Type: text/html, Size: 4475 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list [ really mh ]
2018-06-25 14:18 ` [TUHS] " Clem Cole
@ 2018-06-25 15:28 ` Jon Steinhart
2018-06-26 7:49 ` Ralph Corderoy
0 siblings, 1 reply; 133+ messages in thread
From: Jon Steinhart @ 2018-06-25 15:28 UTC (permalink / raw)
To: TUHS main list
I'm a big fan of mh (nmh now) and a sometimes maintainer.
What I love about it is that it's a separate set of commands as
opposed to an integrated blob. That means that I can script.
I also love the one-file-per-message thing, although it's somewhat
less useful with mime; can't just grep anymore.
One of the features that was added to nmh allows it to invoke external
programs when doing its own message processing. I use this to maintain
a parallel ElasticSearch database of my mail; just a simple word to
message folder/file thing. I actually go through the trouble of
decoding many mime types when building this database. This gives me
the ability to very quickly locate messages based on their contents.
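Schematically the hook script is something like this (a sketch only: the
hook wiring is left out, MIME decoding is skipped, and the Elasticsearch
URL and index name are made up; the script just assumes it gets the path
of the newly filed message as its first argument):
    #!/bin/sh
    # crude word-to-message indexer: $1 is the path of the message just filed
    msg=$1
    words=$(sed '1,/^$/d' "$msg" | tr -cs '[:alnum:]' '[\n*]' | sort -u | tr '\n' ' ')
    curl -s -X POST 'http://localhost:9200/mail/_doc' \
        -H 'Content-Type: application/json' \
        --data-binary "{\"path\": \"$msg\", \"words\": \"$words\"}" >/dev/null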
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list [ really mh ]
2018-06-25 15:28 ` [TUHS] off-topic list [ really mh ] Jon Steinhart
@ 2018-06-26 7:49 ` Ralph Corderoy
0 siblings, 0 replies; 133+ messages in thread
From: Ralph Corderoy @ 2018-06-26 7:49 UTC (permalink / raw)
To: tuhs
Hi Jon,
> I'm a big fan of mh (nmh now) and a sometimes maintainer.
Ditto × 2. :-) https://www.nongnu.org/nmh/
> What I love about it is that it's a separate set of commands as
> opposed to an integrated blob. That means that I can script.
Yes, this is key. I don't procmail(1) my emails into folders for later
reading by theme, instead I have a shell script that runs through the
inbox looking for emails to process.
pick(1) finds things of interest, e.g.
pick --list-id '<tuhs\.minnie\.tuhs\.org>'
I then display those emails in a variety of ways:
one-line summary of each;
scrape and summarise signal from third-party noise with sed(1),
etc., having decoded the MIME;
read each in full in less(1) having configured `K', `S', ... to
update nmh's sequences called `keep', `spam', ..., and move onto
the next email.
And finally have read(1) prompt me for an action with a common default,
delete, refile, etc.
Then the script does it all again for the pick from the inbox.
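The skeleton of the script is roughly this (not the real thing: sequence
handling, MIME decoding, and the keep/spam/refile bookkeeping are all
left out):
    msgs=$(pick +inbox --list-id '<tuhs\.minnie\.tuhs\.org>') || exit 0
    scan $msgs                   # one-line summary of each
    for m in $msgs
    do
        show "$m" | less         # read in full, decide what to do next
    done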
The result is I'm led through all the routine processing, mainly
hitting Enter with the odd bit of `DDDKSD'. A bit similar to using the
rn(1) Usenet reader's `do the next thing' `Space' key.
In interactive use, old-MH hands don't type `pick -subject ...', but
instead have a personal ~/bin/* that suits their needs. For me, `s'
shows an email, `sc' gives a scan(1) listing, one per line, of the end
of the current folder; just enough based on `tput lines'. I search
with ~/bin/-sub, and so on.
Much more flexible than a silo like mail(1) by benefiting from the
`programming tools' model.
--
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-24 0:18 ` Grant Taylor via TUHS
` (2 preceding siblings ...)
2018-06-25 14:18 ` [TUHS] " Clem Cole
@ 2018-06-25 15:51 ` Steffen Nurpmeso
2018-06-25 18:21 ` Grant Taylor via TUHS
3 siblings, 1 reply; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-25 15:51 UTC (permalink / raw)
To: Grant Taylor via TUHS
Grant Taylor via TUHS wrote in <09ee8833-c8c0-8911-751c-906b737209b7@spa\
mtrap.tnetconsulting.net>:
|On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
|> Absolutely true. And hoping that, different to web browsers, no let me
|> call it pseudo feature race is started that results in less diversity
|> instead of anything else.
|
|I'm not sure I follow. I feel like we do have the choice of MUAs or web
|browsers. Sadly some choices are lacking compared to other choices.
|IMHO the maturity of some choices and lack there of in other choices
|does not mean that we don't have choices.
Interesting. I do not see a real choice for me. Netsurf may be
one but could not do Javascript when i last looked, so this will not
work out for many, and increasingly so. Just an hour ago i had a fight
with Chromium on my "secure box" i use for the stuff which knows
about bank accounts and passwords in general, and (beside the fact
it crashes when trying to prepare PDF printouts) it is just
unusably slow. I fail to understand (well actually i fail to
understand a lot of things but) why you need a GPU process for
a 2-D HTML page. And then this:
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
[5238:5251:0625/143535.184020:ERROR:bus.cc(394)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[5238:5279:0625/143536.401172:ERROR:bus.cc(394)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
[5293:5293:0625/143541.299106:ERROR:gl_implementation.cc(292)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
[5293:5293:0625/143541.404733:ERROR:viz_main_impl.cc(196)] Exiting GPU process due to errors during initialization
[..repeating..]
[5238:5269:0625/143544.747819:ERROR:browser_gpu_channel_host_factory.cc(121)] Failed to launch GPU process.
[5238:5238:0625/143544.749814:ERROR:gpu_process_transport_factory.cc(1009)] Lost UI shared context.
I mean, it is not there, ok?
I cannot really imagine how hard it is to write a modern web
browser, with the highly integrative DOM tree, CSS and Javascript
and such, however, and the devil is in the details anyway.
I realize that Chromium does not seem to offer options for Cookies
and such - i have a different, older browser for that.
All of that is off-topic anyway, but i do not know why you say we
have choices.
|> If there is the freedom for a decision. That is how it goes, yes. For
..
|> But it is also nice to see that there are standards which were not fully
|> thought through and required backward incompatible hacks to fill \
|> the gaps.
|
|Why is that a "nice" thing?
Good question. Then again, young men (and women) need to have
a chance to do anything at all. Practically speaking. For
example we see nearly a thousand new RFCs per year. That is
more than in roughly the first two decades all in all. And all is
getting better.
|> Others like DNS are pretty perfect and scale fantastic. Thus.
|
|Yet I frequently see DNS problems for various reasons. Not the least of
|which is that many clients do not gracefully fall back to the secondary
|DNS server when the primary is unavailable.
Really? Not that i know of. Resolvers should be capable of
providing quality of service if multiple name servers are known,
i would say. This is even in RFC 1034 as i see it, SLIST and
SBELT; the latter i filled in from the "nameserver" entries of
/etc/resolv.conf, of which there should then be multiple. (Or
point to localhost and have a true local resolver or something
like dnsmasq.)
I do see DNS failures via Wifi but that is not the fault of DNS,
but of the provider i use.
P.S.: actually of the only three things i have ever hated about
DNS, and i came to it in 2004 with EDNS etc. already all around,
one is that backward compatibility was chosen for domain names,
and therefore we gained IDNA, which is a terribly complicated
brainfuck thing that actually caused incompatibilities, but these
were then waved through and deemed ok. That is absurd.
(If UTF-8 is too lengthy, i would have used UTF-7 for this, the
average octet/character ratio is still enough for most things on
the internet. An EDNS bit could have been used for extended
domain name/label lengths, and if that had been done 25 years ago
we would have been fine in practice. Until then registrars and
administrators would have had to decide whether they want to use
extended names or not. And labels of 25 characters are not common
even today, nowhere i ever looked.)
And the third is DNSSEC, which i read the standard of and said
"no". Just last year or the year before that we finally got DNS
via TCP/TLS and DNS via DTLS, that is, normal transport security!
Twenty years too late, but i really had good days when i saw those
standards flying by! Now all we need are zone administrators
which publish certificates via DNS and DTLS and TCP/TLS consumers
which can integrate those in their own local pool (for at least
their runtime).
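For example, such a query can go over TLS on port 853 with kdig from
Knot DNS (a sketch; the resolver address is only an example):
  # DNS over TLS instead of cleartext port 53.
  kdig +tls @1.1.1.1 netbsd.org A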
|> Ah, it has become a real pain. It is ever so astounding how much HTML5
|> specifics, scripting and third party analytics can be involved for \
|> a page
|> that would have looked better twenty years ago, or say whenever CSS 2.0
|> came around.
|
|I'm going to disagree with you there. IMHO the standard is completely
|separate with what people do with it.
They used <object> back in the 90s; an equivalent should have
made it into the standard back then. I mean this direction was
clear, right? Maybe then the right people would still have more
influence. Not that it would matter; all those many people
from the industry will always sail their own regatta and use what
is hip.
|> Today almost each and every commercial site i visit is in practice
|> a denial of service attack. For me and my little box at least, and
|> without gaining any benefit from that!
|
|I believe those webmasters have made MANY TERRIBLE choices and have
|ended up with the bloat that they now have. - I do not blame the
|technology. I blame what the webmasters have done with the technology.
|
|Too many people do not know what they are doing and load "yet another
|module" to do something that they can already do with what's already
|loaded on their page. But they don't know that. They just glue things
|together until they work. Even if that means that they are loading the
|same thing multiple times because multiple of their 3rd party components
|loads a common module themselves. This is particularly pernicious with
|JavaScript.
Absolutely. And i am watching pretty closely how the car industry
presents itself in Germany, and they are all a nuisance, in how
they seem to track each other and follow in each other's
footsteps. We had those all-script-based slide effects, and they
all need high definition images and need to load them all at once,
non-progressively. It becomes absurd if you need to download 45
megabytes of data, and >43 of those are an image of a car in
nature with a very clean model wearing an axe. This is "cool"
only if you have a modern environment and the data locally
available. In my humble opinion.
...
|> I actually have no idea of nmh, but i for one think the sentence of
|> the old BSD Mail manual stating it is an "intelligent mail processing
|> system, which has a command syntax reminiscent of ed(1) with lines
|> replaced by messages" has always been a bit excessive. And the fork i
|> maintain additionally said it "is also usable as a mail batch language,
|> both for sending and receiving mail", adding onto that.
|
|What little I know about the MH type mail stores and associated
|utilities are indeed quite powerful. I think they operate under the
|premise that each message is its own file and that you work in
|something akin to a shell if not your actual OS shell. I think the MH
|commands are quite literally unix commands that can be called from the
|unix shell. I think this is in the spirit of simply enhancing the shell
|to seem as if it has email abilities via the MH commands. Use any
|traditional unix text processing utilities you want to manipulate email.
|
|MH has always been attractive to me, but I've never used it myself.
I actually hate the concept very, very much ^_^, for me it has
similarities with brainfuck. I could not use it.
|> You say it. In the research Unix nupas mail (as i know it from Plan9)
|> all those things could have been done with a shell script and standard
|> tools like grep(1) and such.
|
|I do say that and I do use grep, sed, awk, formail, procmail, cp, mv,
|and any number of traditional unix file / text manipulation utilities on
|my email weekly. I do this both with the Maildir (which is quite
|similar to MH) on my mail server and to the Thunderbird message store
|that is itself a variant of Maildir with a file per message in a
|standard directory structure.
|
|> Even Thunderbird would simply be a maybe even little nice graphical
|> application for display purposes.
|
|The way that I use Thunderbird, that's exactly what it is. A friendly
|and convenient GUI front end to access my email.
|
|> The actual understanding of storage format and email standards would
|> lie solely within the provider of the file system.
|
|The emails are stored in discrete files that themselves are effectively
|mbox files with one message therein.
|
|> Now you use several programs which all ship with all the knowledge.
|
|I suppose if you count grepping for a line in a text file as knowledge of
|the format, okay.
|
|egrep "^Subject: " message.txt
|
|There's nothing special about that. It's a text file with a line that
|looks like this:
|
|Subject: Re: [TUHS] off-topic list
Except that this will work only for all-English text, as otherwise
character sets come into play, text may be in a different
character set, mail standards may impose a content-transfer
encoding, and then what you are looking at is actually a database,
not the data as such.
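For instance, extracting a decoded Subject takes an extra step; a
sketch assuming formail (from procmail) and Perl's Encode module are
available:
  # Pull out the Subject and undo any RFC 2047 encoded-words.
  formail -cx Subject: < message.txt |
      perl -CS -MEncode -ne 'print decode("MIME-Header", $_)'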
This is what i find so impressive about that Plan9 approach, where
the individual subparts of the message are available as files in
the filesystem, subjects etc. as such, decoded and available as
normal files. I think this really is .. impressive.
|> I for one just want my thing to be easy and reliable controllable via
|> a shell script.
|
|That's a laudable goal. I think MH is very conducive to doing that.
Maybe.
|> You could replace procmail (which is i think perl and needs quite some
|> perl modules) with a monolithic possibly statically linked C program.
|
|I'm about 95% certain that procmail is its own monolithic C program.
|I've never heard of any reference to Perl in association with procmail.
|Are you perhaps thinking of a different local delivery agent?
Oh really? Then likely so, yes. I have never used this.
|> Then. With full error checking etc. This is a long road ahead, for
|> my thing.
|
|Good luck to you.
Thanks, eh, thanks. Time will bring.. or not.
|> So ok, it does not, actually. It chooses your "Grant Taylor via TUHS"
|> which ships with the TUHS address, so one may even see this as an
|> improvement to DMARC-less list replies, which would go to TUHS, with or
|> without the "The Unix Heritage Society".
|
|Please understand, that's not how I send the emails. I send them with
|my name and my email address. The TUHS mailing list modifies them.
|
|Aside: I support the modification that it is making.
It is of course no longer the email that left you. It is not
just that headers are added to record the path it travelled.
I do have a bad feeling about these, but technically i do not seem
to have an opinion.
|> I am maybe irritated by the 'dkim=fail reason="signature verification
|> failed"' your messages produce. It would not be good to filter out
|> failing DKIMs, at least on TUHS.
|
|Okay. That is /an/ issue. But I believe it's not /my/ issue to solve.
|
|My server DKIM signs messages that it sends out. From everything that
|I've seen and tested (and I actively look for problems) the DKIM
|signatures are valid and perfectly fine.
|
|That being said, the TUHS mailing list modifies message in the following
|ways:
|
|1) Modifies the From: when the sending domain uses DMARC.
|2) Modifies the Subject to prepend "[TUHS] ".
|3) Modifies the body to append a footer.
|
|All three of these actions modify the data that receiving DKIM filters
|calculate hashes based on. Since the data changed, obviously the hash
|will be different.
|
|I do not fault THUS for this.
|
|But I do wish that TUHS stripped DKIM and associated headers of messages
|going into the mailing list. By doing that, there would be no data to
|compare to that wouldn't match.
|
|I think it would be even better if TUHS would DKIM sign messages as they
|leave the mailing list's mail server.
Well, this adds the burden onto TUHS. Just like i have said.. but
you know, more and more SMTP servers connect directly via STARTTLS
or TCP/TLS right away. The TUHS postfix server does not seem to do
so on the sending side -- you know, i am not an administrator; not
until the 15th of March this year did i realize that my Postfix
did not repeat all the smtpd_* variables as smtp_* ones, resulting
in my outgoing client connections having an entirely different
configuration than what i provided for what i thought was "the
server". Then i did, among others,
smtpd_tls_security_level = may
+smtp_tls_security_level = $smtpd_tls_security_level
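One can check that the client side really picked the value up with
postconf, which ships with Postfix:
  # Show what the running configuration resolves to, server and client side.
  postconf smtpd_tls_security_level smtp_tls_security_level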
But if TUHS did, why should it create a DKIM signature?
Ongoing is the effort to ensure SMTP uses TLS all along the route,
i seem to recall i have seen RFCs pass by which accomplish that.
Or only drafts?? Hmmm.
...
|I don't know if PGP or S/MIME will ever mandate anything about headers
|which are structurally outside of their domain.
|
|I would like to see an option in MUAs that support encrypted email for
|something like the following:
|
| Subject: (Subject in encrypted body.)
|
|Where the encrypted body included a header like the following:
|
| Encrypted-Subject: Re: [TUHS] off-topic list
|
|I think that MUAs could then display the subject that was decrypted out
|of the encrypted body.
Well S/MIME does indeed specify this mode of encapsulating the
entire message including the headers, and requires MUAs to
completely ignore the outer envelope in this case. (With an RFC
discussing problems of this approach.) The BSD Mail clone
i maintain does not support this yet, along with other important
aspects of S/MIME, like the possibility to "self-encrypt" (so that
the message can be read again, i.e. that the encrypted version
lands on disk in the record, not the plaintext one). I hope it
will be part of the OpenPGP, actually privacy, rewrite this summer.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-25 15:51 ` [TUHS] off-topic list Steffen Nurpmeso
@ 2018-06-25 18:21 ` Grant Taylor via TUHS
2018-06-26 20:38 ` Steffen Nurpmeso
0 siblings, 1 reply; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-25 18:21 UTC (permalink / raw)
To: tuhs
[-- Attachment #1: Type: text/plain, Size: 8873 bytes --]
On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
> I cannot really imagine how hard it is to write a modern web browser,
> with the highly integrative DOM tree, CSS and Javascript and such,
> however, and the devil is in the details anyway.
Apparently it's difficult. Or people aren't sufficiently motivated.
I know that I personally can't do it.
> Good question. Then again, young men (and women) need to have a chance
> to do anything at all. Practically speaking. For example we see almost
> thousands of new RFCs per year. That is more than in the first about
> two decades all in all. And all is getting better.
Yep. I used to have aspirations of reading RFCs. But work and life
happened, then the rate of RFC publishing exploded, and I gave up.
> Really? Not that i know of. Resolvers should be capable to provide
> quality of service, if multiple name servers are known, i would say.
I typically see this from the authoritative server side. I've
experienced this (at least) once myself and have multiple colleagues
that have also experienced this (at least) once themselves too.
If the primary name server is offline for some reason (maintenance,
outage, hardware, what have you) clients start reporting a problem that
manifests itself as a complete failure. They seem to not fall back to
the secondary name server.
I'm not talking about "slowdown" complaints. I'm talking about
"complete failure" and "inability to browse the web".
I have no idea why this is. All the testing that I've done indicate
that clients fall back to secondary name servers. Yet multiple
colleagues and myself have experienced this across multiple platforms.
It's because of these failure symptoms that I'm currently working with a
friend & colleague to implement VRRP between his name servers so that
either of them can serve traffic for both DNS service IPs / Virtual IPs
(VIPs) in the event that the other server is unable to do so.
We all agree that a 5 ~ 90 second (depending on config) outage with
automatic service continuation after VIP migration is a LOT better than
end users experiencing what are effectively service outages of the
primary DNS server.
Even in the outages, the number of queries to the secondary DNS
server(s) don't increase like you would expect as clients migrate from
the offline primary to the online secondary.
In short, this is a big unknown that I, and multiple colleagues, have
seen and can't explain. So, we have each independently decided to
implement solutions to keep the primary DNS IP address online using
whatever method we prefer.
> This is even RFC 1034 as i see, SLIST and SBELT, whereas the latter
> i filled in from "nameserver" as of /etc/resolv.conf, it should have
> multiple, then. (Or point to localhost and have a true local resolver
> or something like dnsmasq.)
I completely agree that per all documentation, things SHOULD work with
just the secondary. Yet in my experience, from a recursive DNS server
operator's standpoint, things don't work.
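A quick way to see whether each configured server answers on its own is
to query them individually (a sketch; example.com is just a test name
and the timeouts are arbitrary):
  awk '/^nameserver/ { print $2 }' /etc/resolv.conf |
  while read -r ns; do
      printf '%s: ' "$ns"
      dig +short +time=2 +tries=1 @"$ns" example.com A || echo 'no answer'
  done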
> I do see DNS failures via Wifi but that is not the fault of DNS, but of
> the provider i use.
Sure. That's the nature of a wireless network. It's also unrelated to
the symptoms I describe above.
> P.S.: actually the only three things i have ever hated about DNS, and
> i came to that in 2004 with EDNS etc. all around yet, is that backward
> compatibly has been chosen for domain names, and therefore we gained
> IDNA, which is a terribly complicated brainfuck thing that actually
> caused incompatibilities, but these where then waved through and ok.
> That is absurd.
I try to avoid dealing with IDNs. Fortunately I'm fairly lucky in doing so.
> And the third is DNSSEC, which i read the standard of and said "no".
> Just last year or the year before that we finally got DNS via TCP/TLS
> and DNS via DTLS, that is, normal transport security!
My understanding is that DNSSEC provides verifiable authenticity of the
information transported by DNSSEC. It's also my understanding that
DTLS, DOH, DNScrypt, etc provide DNS /transport/ encryption
(authentication & privacy).
As I understand it, there is nothing about DTLS, DOH, DNScrypt, etc that
prevent servers on the far end of said transports from modifying queries
/ responses that pass through them, or serving up spoofed content.
To me, DNSSEC serves a completely different need than DTLS, DOH,
DNScrypt, etc.
Please correct me and enlighten me if I'm wrong or have oversimplified
things.
> Twenty years too late, but i really had good days when i saw those
> standards flying by! Now all we need are zone administrators which
> publish certificates via DNS and DTLS and TCP/TLS consumers which can
> integrate those in their own local pool (for at least their runtime).
DANE has been waiting on that for a while.
I think DANE does require DNSSEC. I think DKIM does too.
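For reference, a DANE TLSA record for a mail server's SMTP port can be
looked up like this (the hostname is only an example, and DANE expects
the answer to be DNSSEC-validated):
  dig +short TLSA _25._tcp.mail.example.org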
> I actually hate the concept very, very much ^_^, for me it has
> similarities with brainfuck. I could not use it.
Okay. To each his / her own.
> Except that this will work only for all-english, as otherwise character
> sets come into play, text may be in a different character set, mail
> standards may impose a content-transfer encoding, and then what you are
> looking at is actually a database, not the data as such.
Hum. I've not considered non-English as I so rarely see it in raw email
format.
> This is what i find so impressive about that Plan9 approach, where
> the individual subparts of the message are available as files in the
> filesystem, subjects etc. as such, decoded and available as normal files.
> I think this really is .. impressive.
I've never had the pleasure of messing with Plan9. It's on my
proverbial To-Do-Someday list. It does sound very interesting.
> Thanks, eh, thanks. Time will bring.. or not.
;-)
> It is of course not the email that leaves you no more. It is not just
> headers are added to bring the traceroute path. I do have a bad feeling
> with these, but technically i do not seem to have an opinion.
I too have some unease with things while not having any technical
objection to them.
> Well, this adds the burden onto TUHS. Just like i have said.. but
> you know, more and more SMTP servers connect directly via STARTSSL or
> TCP/TLS right away. TUHS postfix server does not seem to do so on the
> sending side
The nature of email is (and has been) changing.
We're no longer trying to use 486s to manage mainstream email. My
opinion is that we have the CPU cycles to support current standards.
> -- you know, i am not an administor, no earlier but on the 15th of March
> this year i realized that my Postfix did not reiterate all the smtpd_*
> variables as smtp_* ones, resulting in my outgoing client connections
> to have an entirely different configuration than what i provided for
> what i thought is "the server", then i did, among others
>
> smtpd_tls_security_level = may +smtp_tls_security_level =
> $smtpd_tls_security_level
Oy vey.
> But if TUHS did, why should it create a DKIM signature? Ongoing is the
> effort to ensure SMTP uses TLS all along the route, i seem to recall i
> have seen RFCs pass by which accomplish that. Or only drafts?? Hmmm.
SMTP over TLS (STARTTLS) is just a transport. I can send anything I
want across said transport.
DKIM is about enabling downstream authentication in email. Much like
DNSSEC does for DNS.
There is nothing that prevents sending false information down a secure
communications channel.
> Well S/MIME does indeed specify this mode of encapsulating the entire
> message including the headers, and enforce MUAs to completely ignore the
> outer envelope in this case. (With a RFC discussing problems of this
> approach.)
Hum. That's contrary to my understanding. Do you happen to have RFC
and section numbers handy? I've wanted to, needed to, go read more on
S/MIME. It now sounds like I'm missing something.
I wonder if the same can be said about PGP.
> The BSD Mail clone i maintain does not support this yet, along with other
> important aspects of S/MIME, like the possibility to "self-encrypt" (so
> that the message can be read again, a.k.a. that the encrypted version
> lands on disk in a record, not the plaintext one). I hope it will be
> part of the OpenPGP, actually privacy rewrite this summer.
I thought that many MUAs handled that problem by adding the sender as an
additional recipient in the S/MIME structure. That way the sender could
extract the ephemeral key using their private key and decrypt the
encrypted message that they sent.
--
Grant. . . .
unix || die
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 3982 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-25 18:21 ` Grant Taylor via TUHS
@ 2018-06-26 20:38 ` Steffen Nurpmeso
0 siblings, 0 replies; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-26 20:38 UTC (permalink / raw)
To: Grant Taylor via TUHS
Hello. And sorry for the late reply, "lovely weather we're
having"..
Grant Taylor via TUHS wrote in <c4debab5-181d-c071-850a-a8865d6e98d0@spa\
mtrap.tnetconsulting.net>:
|On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
...
|> I cannot really imagine how hard it is to write a modern web browser,
|> with the highly integrative DOM tree, CSS and Javascript and such,
|> however, and the devil is in the details anyway.
|
|Apparently it's difficult. Or people aren't sufficiently motivated.
|
|I know that I personally can't do it.
|
|> Good question. Then again, young men (and women) need to have a chance
|> to do anything at all. Practically speaking. For example we see almost
|> thousands of new RFCs per year. That is more than in the first about
|> two decades all in all. And all is getting better.
|
|Yep. I used to have aspirations of reading RFCs. But work and life
|happened, then the rate of RFC publishing exploded, and I gave up.
I think it is something comparable.
|> Really? Not that i know of. Resolvers should be capable to provide
|> quality of service, if multiple name servers are known, i would say.
|
|I typically see this from the authoritative server side. I've
|experienced this (at least) once myself and have multiple colleagues
|that have also experienced this (at least) once themselves too.
|
|If the primary name server is offline for some reason (maintenance,
|outage, hardware, what have you) clients start reporting a problem that
|manifests itself as a complete failure. They seem to not fall back to
|the secondary name server.
|
|I'm not talking about "slowdown" complaints. I'm talking about
|"complete failure" and "inability to browse the web".
|
|I have no idea why this is. All the testing that I've done indicate
|that clients fall back to secondary name servers. Yet multiple
|colleagues and myself have experienced this across multiple platforms.
|
|It's because of these failure symptoms that I'm currently working with a
|friend & colleague to implement VRRP between his name servers so that
|either of them can serve traffic for both DNS service IPs / Virtual IPs
|(VIPs) in the event that the other server is unable to do so.
|
|We all agree that a 5 ~ 90 second (depending on config) outage with
|automatic service continuation after VIP migration is a LOT better than
|end users experiencing what are effectively service outages of the
|primary DNS server.
|
|Even in the outages, the number of queries to the secondary DNS
|server(s) don't increase like you would expect as clients migrate from
|the offline primary to the online secondary.
|
|In short, this is a big unknown that I, and multiple colleagues, have
|seen and can't explain. So, we have each independently decided to
|implement solutions to keep the primary DNS IP address online using what
|ever method we prefer.
That is beyond me, i have never dealt with the server side. I had
a pretty normal QOS client: a sequentially used searchlist,
configurable DNS::conf_retries, creating SERVFAIL cache entries if
all bets were off. Placing failing servers last in the SBELT, but
keeping them; i see a dangling TODO note for an additional place for
failing nameservers, to have them rest for a while.
|> This is even RFC 1034 as i see, SLIST and SBELT, whereas the latter
|> i filled in from "nameserver" as of /etc/resolv.conf, it should have
|> multiple, then. (Or point to localhost and have a true local resolver
|> or something like dnsmasq.)
|
|I completely agree that per all documentation, things SHOULD work with
|just the secondary. Yet my experience from recursive DNS server
|operator stand point, things don't work.
That is bad.
|> I do see DNS failures via Wifi but that is not the fault of DNS, but of
|> the provider i use.
|
|Sure. That's the nature of a wireless network. It's also unrelated to
|the symptoms I describe above.
Well, in fact i think there is something like a scheduler error
around there, too, because sometimes i am placed nowhere and get
no bits at all, most often at minute boundaries, and for minutes.
But well...
|> P.S.: actually the only three things i have ever hated about DNS, and
|> i came to that in 2004 with EDNS etc. all around yet, is that backward
|> compatibly has been chosen for domain names, and therefore we gained
|> IDNA, which is a terribly complicated brainfuck thing that actually
|> caused incompatibilities, but these where then waved through and ok.
|> That is absurd.
|
|I try to avoid dealing with IDNs. Fortunately I'm fairly lucky in \
|doing so.
I censor this myself. I for one would vote for UTF-7 with
a different ACE (ASCII compatible encoding) prefix, UTF-7 is
somewhat cheap to implement and used for, e.g., the pretty
omnipresent IMAP. I mean, i (but who am i) have absolutely
nothing against super smart algorithms somewhere, for example in
a software that domain registrars have to run in order to grant
domain names. But the domain name as such should just be it,
otherwise it is .. not. Yes that is pretty Hollywood but if
i mistype an ASCII hostname i also get a false result.
Anyway: fact is that for example German IDNA 2008 rules are
incompatible with the earlier ones: really, really bizarre.
|> And the third is DNSSEC, which i read the standard of and said "no".
|> Just last year or the year before that we finally got DNS via TCP/TLS
|> and DNS via DTLS, that is, normal transport security!
|
|My understanding is that DNSSEC provides verifiable authenticity of the
|information transported by DNSSEC. It's also my understanding that
|DTLS, DOH, DNScrypt, etc provide DNS /transport/ encryption
|(authentication & privacy).
|
|As I understand it, there is nothing about DTLS, DOH, DNScrypt, etc that
|prevent servers on the far end of said transports from modifying queries
|/ responses that pass through them, or serving up spoofed content.
|
|To me, DNSSEC serves a completely different need than DTLS, DOH,
|DNScrypt, etc.
|
|Please correct me and enlighten me if I'm wrong or have oversimplified
|things.
No, part of it is that you can place signatures which can be used
to verify the content that is actually delivered. This is right.
And yes, this is different from transport layer security. And DNS
is different in that data is cached anywhere in the distributed
topology, so having the data ship with verifiable signatures may
sound sound. But when i look around this is not what is in use,
twenty years later. I mean you can look up netbsd.org and see
how it works, and if you have a resolver which can make use of
them that would be ok (mine cannot), but as long as those
signatures are not mandatory they can be left out somewhere along
the way, can't they. My VM hoster offers two nameservers and
explicitly does not support DNSSEC, for example (or at least did
not when we made the contract, about 30 months ago).
Then again i wish (i mean, it is not that bad in this respect;
politics or philosophy are something different) i could reach out
securely to the mentioned servers. In the end you need to put
trust in someone. I mean, most of you have smartphones, and some
processors run entire operating systems alongside the running
operating system, and the one software producer who offers its
entire software openly on a public hoster is condemned, whereas
others who do not publish any source code are taken along to the
toilet and into the bedroom; that is anything but in order.
So i am sitting behind that wall and have to take what i get.
And then i am absolutely convinced that humans make a lot of
mistakes, and anything that does not autoconfigure correctly is
prone to misconfiguration and errors, whatever may be the cause,
from a hoped-for girl friend to a dying suffering wife, you know.
I think i personally could agree with these signatures if the
transport would be secure, and if i could make a short single
connection to the root server of a zone and get the certificate
along with the plain TLS handshake. What i mean is, the DNS would
automatically provide signatures based on the very same key that
is used for TLS, and the time to live for all delivered records is
on par with the lifetime of that certificate at maximum.
No configuration at all, automatically secure, a lot code less to
maintain. And maybe even knowledge whether signatures are to be
expected for a zone, so that anything else can be treated as
spoofed.
All this is my personal blabla though. I think the thinking is
not bad, however. Anyway, anybody who is not running all the
servers her- or himself has been putting trust in others for
decades, and all the time, and if i can have a secure channel to
those people i (put) trust (in) then this is a real improvement.
If those people do the same then i have absolutely zero problems
with only encrypted transport as opposed to open transport and
signed data.
|> Twenty years too late, but i really had good days when i saw those
|> standards flying by! Now all we need are zone administrators which
|> publish certificates via DNS and DTLS and TCP/TLS consumers which can
|> integrate those in their own local pool (for at least their runtime).
|
|DANE has been waiting on that for a while.
|
|I think DANE does require DNSSEC. I think DKIM does too.
I have the RFCs around... but puuh. DKIM says
The DNS is proposed as the initial mechanism for the public keys.
Thus, DKIM currently depends on DNS administration and the security
of the DNS system. DKIM is designed to be extensible to other key
fetching services as they become available.
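So a verifier takes the selector (s=) and domain (d=) tags from the
DKIM-Signature header and fetches the public key from DNS; with
made-up names that is simply:
  dig +short TXT selector._domainkey.example.org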
|> I actually hate the concept very, very much ^_^, for me it has
|> similarities with brainfuck. I could not use it.
|
|Okay. To each his / her own.
Of course.
|> Except that this will work only for all-english, as otherwise character
|> sets come into play, text may be in a different character set, mail
|> standards may impose a content-transfer encoding, and then what you are
|> looking at is actually a database, not the data as such.
|
|Hum. I've not considered non-English as I so rarely see it in raw email
|format.
|
|> This is what i find so impressive about that Plan9 approach, where
|> the individual subparts of the message are available as files in the
|> filesystem, subjects etc. as such, decoded and available as normal \
|> files.
|> I think this really is .. impressive.
|
|I've never had the pleasure of messing with Plan9. It's on my
|proverbial To-Do-Someday list. It does sound very interesting.
To me it is more about some of the concepts in there; i actually
cannot use it with the defaults. I cannot use the editors Sam or
Acme, and i do not actually like using the mouse, which is
a problem.
|> Thanks, eh, thanks. Time will bring.. or not.
|
|;-)
Oh, i am hoping for "will", but it takes a lot of time. It would
have been easier to start from scratch, and, well, in C++. I have
never been a real application developer, i am more for libraries
or say, interfaces which enable you to do something. Staying
a bit in the back, but providing support as necessary. Well.
I hope we get there.
|> It is of course not the email that leaves you no more. It is not just
|> headers are added to bring the traceroute path. I do have a bad feeling
|> with these, but technically i do not seem to have an opinion.
|
|I too have some unease with things while not having any technical
|objection to them.
|
|> Well, this adds the burden onto TUHS. Just like i have said.. but
|> you know, more and more SMTP servers connect directly via STARTSSL or
|> TCP/TLS right away. TUHS postfix server does not seem to do so on the
|> sending side
|
|The nature of email is (and has been) changing.
|
|We're no longer trying to use 486s to manage mainstream email. My
|opinion is that we have the CPU cycles to support current standards.
|
|> -- you know, i am not an administor, no earlier but on the 15th of March
|> this year i realized that my Postfix did not reiterate all the smtpd_*
|> variables as smtp_* ones, resulting in my outgoing client connections
|> to have an entirely different configuration than what i provided for
|> what i thought is "the server", then i did, among others
|>
|> smtpd_tls_security_level = may +smtp_tls_security_level =
|> $smtpd_tls_security_level
|
|Oy vey.
Hm.
|> But if TUHS did, why should it create a DKIM signature? Ongoing is the
|> effort to ensure SMTP uses TLS all along the route, i seem to recall i
|> have seen RFCs pass by which accomplish that. Or only drafts?? Hmmm.
|
|SMTP over TLS (STARTTLS) is just a transport. I can send anything I
|want across said transport.
That is true.
|DKIM is about enabling downstream authentication in email. Much like
|DNSSEC does for DNS.
|
|There is nothing that prevents sending false information down a secure
|communications channel.
I mean, that argument is an all destroying hammer. But it is
true. Of course.
|> Well S/MIME does indeed specify this mode of encapsulating the entire
|> message including the headers, and enforce MUAs to completely ignore the
|> outer envelope in this case. (With a RFC discussing problems of this
|> approach.)
|
|Hum. That's contrary to my understanding. Do you happen to have RFC
|and section numbers handy? I've wanted to, needed to, go read more on
|S/MIME. It now sounds like I'm missing something.
In draft "Considerations for protecting Email header with S/MIME"
by Melnikov there is
[RFC5751] describes how to protect the Email message
header [RFC5322], by wrapping the message inside a message/rfc822
container [RFC2045]:
In order to protect outer, non-content-related message header
fields (for instance, the "Subject", "To", "From", and "Cc"
fields), the sending client MAY wrap a full MIME message in a
message/rfc822 wrapper in order to apply S/MIME security services
to these header fields. It is up to the receiving client to
decide how to present this "inner" header along with the
unprotected "outer" header.
When an S/MIME message is received, if the top-level protected
MIME entity has a Content-Type of message/rfc822, it can be
assumed that the intent was to provide header protection. This
entity SHOULD be presented as the top-level message, taking into
account header merging issues as previously discussed.
I am a bit behind; the discussion after the Efail report this
spring revealed some new developments to me that i did not know
about, and i have not yet found the time to look at those.
|I wonder if the same can be said about PGP.
Well yes, i think the same is true for such, i have already
received encrypted mails which ship a MIME multipart message, the
first being 'Content-Type: text/rfc822-headers;
protected-headers="v1"', the latter providing the text.
|> The BSD Mail clone i maintain does not support this yet, along with \
|> other
|> important aspects of S/MIME, like the possibility to "self-encrypt" (so
|> that the message can be read again, a.k.a. that the encrypted version
|> lands on disk in a record, not the plaintext one). I hope it will be
|> part of the OpenPGP, actually privacy rewrite this summer.
|
|I thought that many MUAs handled that problem by adding the sender as an
|additional recipient in the S/MIME structure. That way the sender could
|extract the ephemeral key using their private key and decrypt the
|encrypted message that they sent.
Yes, that is how this is done. The former maintainer implemented
this rather as something like GnuPG's --hidden-recipient option,
more or less: each receiver gets her or his own S/MIME encrypted
copy. The call chain itself never sees such a mail.
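A rough sketch of that "self-encrypt" idea with openssl's smime
subcommand (the file names are placeholders): encrypt the full
message/rfc822 wrapper to the recipient and to the sender's own
certificate, so the stored copy stays readable later:
  openssl smime -encrypt -aes256 \
      -in inner-message.eml -out outgoing.p7m \
      recipient-cert.pem my-own-cert.pem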
--End of <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>
Grant Taylor via TUHS wrote in <5da463dd-fb08-f601-68e3-197e720d5716@spa\
mtrap.tnetconsulting.net>:
|On 06/25/2018 10:10 AM, Steffen Nurpmeso wrote:
|> DKIM reuses the *SSL key infrastructure, which is good.
|
|Are you saying that DKIM relies on the traditional PKI via CA
|infrastructure? Or are you saying that it uses similar technology that
|is completely independent of the PKI / CA infrastructure?
I mean it uses those *SSL tools and thus an infrastructure that is
standardized as such and very widely used, seen by many eyes.
|> (Many eyes see the code in question.) It places records in DNS, which
|> is also good, now that we have DNS over TCP/TLS and (likely) DTLS.
|> In practice however things may differ and to me DNS security is all in
|> all not given as long as we get to the transport layer security.
|
|I believe that a secure DNS /transport/ is not sufficient. Simply
|security the communications channel does not mean that the entity on the
|other end is not lying. Particularly when not talking to the
|authoritative server, likely by relying on caching recursive resolvers.
That record distribution as above, yes, but still: caching those
TSIGs, or whatever their name was, is not mandatory i think.
|> I personally do not like DKIM still, i have opendkim around and
|> thought about it, but i do not use it, i would rather wish that public
|> TLS certificates could also be used in the same way as public S/MIME
|> certificates or OpenPGP public keys work, then only by going to a TLS
|> endpoint securely once, there could be end-to-end security.
|
|All S/MIME interactions that I've seen do use certificates from a well
|know CA via the PKI.
|
|(My understanding of) what you're describing is encryption of data in
|flight. That does nothing to encrypt / protect data at rest.
|
|S/MIME /does/ provide encryption / authentication of data in flight
|/and/ data at rest.
|
|S/MIME and PGP can also be used for things that never cross the wire.
As above, i just meant that if the DNS server is TLS protected,
that key could be used to automatically sign the entire zone data,
so that the entire signing and verification is automatic and can
be deduced from the key used for and by TLS. Using the same
library, a single configuration line etc. Records could then be
cached anywhere just the same as now. Just an idea.
You can use self-signed S/MIME, you can use an OpenPGP key without
any web of trust. Different to the latter, where anyone can upload
her or his key to the pool of OpenPGP (in practice rather GnuPG only
i would say) servers (sks-keyservers.net, which used a self-signed
TLS certificate last time i looked btw., fingerprint
79:1B:27:A3:8E:66:7F:80:27:81:4D:4E:68:E7:C4:78:A4:5D:5A:17),
there is no such possibility for the former. So whereas everybody
can look for the fingerprint i claim via hkps:// and can be pretty
sure that this really is me, no such thing exists for S/MIME.
(I for one was very disappointed when the new German passport had
been developed around Y2K, and no PGP and S/MIME was around. The
Netherlands i think are much better in that. But this is
a different story.)
But i think things happen here, that HTTP .well-known/ mechanism
seems to come into play for more and more things, programs learn
to use TOFU (trust on first use), and manage a dynamic local pool
of trusted certificates. (I do not know exactly, but i have seen
fly by messages which made me think the mutt MUA put some work
into that, for example.) So this is decentralized, then.
|> I am not a cryptographer, however. (I also have not read the TLS v1.3
|> standard which is about to become reality.) The thing however is that
|> for DKIM a lonesome user cannot do anything -- you either need to have
|> your own SMTP server, or you need to trust your provider.
|
|I don't think that's completely accurate. DKIM is a method of signing
|(via cryptographic hash) headers as you see (send) them. I see no
|reason why a client can't add DKIM headers / signature to messages it
|sends to the MSA.
|
|Granted, I've never seen this done. But I don't see anything preventing
|it from being the case.
|
|> But our own user interface is completely detached. (I mean, at least
|> if no MTA is used one could do the DKIM stuff, too.)
|
|I know that it is possible to do things on the receiving side. I've got
|the DKIM Verifier add-on installed in Thunderbird, which gives me client
|side UI indication if the message that's being displayed still passes
|DKIM validation or not. The plugin actually calculates the DKIM hash
|and compares it locally. It's not just relying on a header added by
|receiving servers.
I meant that the MUA could calculate the DKIM stuff itself, but
this works only if the MTA does not add or change headers. That
is what i referred to; but yes, DKIM verification could be done by
a MUA, if it supports it. Mine cannot.
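Outside the MUA, a raw message can also be checked from the command
line; a sketch, assuming one of these verifiers is installed:
  dkimverify < message.eml          # from the dkimpy package
  opendkim-testmsg < message.eml    # ships with OpenDKIM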
--End of <5da463dd-fb08-f601-68e3-197e720d5716@spamtrap.tnetconsulting.net>
Grant Taylor via TUHS wrote in <8c0da561-f786-8039-d2fc-907f2ddd09e3@spa\
mtrap.tnetconsulting.net>:
|On 06/25/2018 10:26 AM, Steffen Nurpmeso wrote:
|> I have never understood why people get excited about Maildir, you have
|> trees full of files with names which hurt the eyes,
|
|First, a single Maildir is a parent directory and three sub-directories.
| Many Maildir based message stores are collections of multiple
|individual Maildirs.
This is Maildir++, i think that was the name, yes.
|Second, the names may look ugly, but they are named independently of the
|contents of the message.
|
|Third, I prefer the file per message as opposed to monolithic mbox for
|performance reasons. Thus I message corruption impacts a single message
|and not the entire mail folder (mbox).
Corruption should not happen, then. This is true for any
database or repository, i think.
|Aside: I already have a good fast database that most people call a file
|system and it does a good job tracking metadata for me.
This can be true, yes. I know of file systems which get very
slow when there are a lot of files in a directory, or which even
run into limits in the worst case. (I have indeed once seen the
latter on a FreeBSD system with what i thought were good file
system defaults.)
|Fourth, I found maildir to be faster on older / slower servers because
|it doesn't require copying (backing up) the monolithic mbox file prior
|to performing an operation on it. It splits reads & writes into much
|smaller chunks that are more friendly (and faster) to the I/O sub-system.
I think this depends. Mostly anything incoming is an appending
write, and in my use case everything is then moved away in one go,
too. Then it definitely depends on the disks you have. Before,
i was using a notebook which had a 3600rpm hard disk, and then you
want it compact. And then it also depends on the cleverness of
the software. Unfortunately my MUA cannot, but you could have an
index like git has, or like the already mentioned Zawinski
describes as it was used for Netscape. I think the enterprise
mail server Dovecot also uses MBOX plus an index by default, but
i could be mistaken. I mean, if you control the file you do not
need to perform an action immediately, for example -- and then,
when synchronization time happens, you end up with a potentially
large write to a single file, instead of having to fiddle around
with a lot of files. But your experience may vary.
|Could many of the same things be said about MH? Very likely. What I
|couldn't say about MH at the time I went looking and comparing (typical)
|unix mail stores was the readily available POP3 and IMAP interface.
|Seeing as how POP3 and IMAP were a hard requirement for me, MH was a
|non-starter.
|
|> they miss a From_ line (and are thus not valid MBOX files, i did not
|> miss that in the other mail).
|
|True. Though I've not found that to be a limitation. I'm fairly
|confident that the Return-Path: header that is added / replaced by my
|MTA does functionally the same thing.
With From_ line i meant that legendary line that i think was
introduced in Unix V5 mail and which prepends each mail message in
a MBOX file, and which causes that ">From" quoting at the
beginning of lines of non MIME-willing (or so configured) MUAs.
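Because of that separator, counting the messages in an mbox is a
one-liner ("mbox" being whatever the file happens to be called):
  grep -c '^From ' mbox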
|I'm sure similar differences can be had between many different solutions
|in Unix's history. SYS V style init.d scripts vs BSD style /etc/rc\d
|style init scripts vs systemd or Solaris's SMF (I think that's its name).
Oh! I am happy to have the former two on my daily machines again,
and i have been reenabled to tell you what is actually going on
there (in user space), even without following some external
software development. Just in case of interest.
|To each his / her own.
It seems you will never be able to 1984 that from the top,
especially not over time.
--End of <8c0da561-f786-8039-d2fc-907f2ddd09e3@spamtrap.tnetconsulting.net>
Ciao.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:54 ` Larry McVoy
2018-06-22 15:17 ` Steffen Nurpmeso
@ 2018-06-22 16:07 ` Tim Bradshaw
2018-06-22 16:36 ` Steve Johnson
2018-06-22 20:55 ` Bakul Shah
1 sibling, 2 replies; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-22 16:07 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 626 bytes --]
On 22 Jun 2018, at 15:54, Larry McVoy <lm@mcvoy.com> wrote:
>
> Try mutt, it's what I use and it threads topics just fine.
The trouble is I have > 10 years of mail sitting in the OSX mail system and although I could probably export it all (it at least used to be relatively straightforward to do this) the sheer terror of doing that is, well, terrifying because there's a lot of stuff in there that matters.
Using the system-provided mail tool was a stupid decision, and one I managed to avoid with the browser &c, but it's too late now.
(Now this is an off-topic discussion from a discussion about off-topicness.)
[-- Attachment #2: Type: text/html, Size: 1481 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 16:07 ` Tim Bradshaw
@ 2018-06-22 16:36 ` Steve Johnson
2018-06-22 20:55 ` Bakul Shah
1 sibling, 0 replies; 133+ messages in thread
From: Steve Johnson @ 2018-06-22 16:36 UTC (permalink / raw)
To: Tim Bradshaw, Larry McVoy; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1241 bytes --]
I, for one, am happy to see some off-topic stuff. The people on
this list, for the most part, represent a way of looking at the world
that is,
sadly, rather uncommon these days. I enjoy looking at more
contemporary events through those eyes, and musing on the changes that
have taken place...
Steve
----- Original Message -----
From: "Tim Bradshaw" <tfb@tfeb.org>
To: "Larry McVoy" <lm@mcvoy.com>
Cc: "The Eunuchs Hysterical Society" <tuhs@tuhs.org>
Sent: Fri, 22 Jun 2018 17:07:57 +0100
Subject: Re: [TUHS] off-topic list
On 22 Jun 2018, at 15:54, Larry McVoy <lm@mcvoy.com [1]> wrote:
Try mutt, it's what I use and it threads topics just fine.
The trouble is I have > 10 years of mail sitting in the OSX mail
system and although I could probably export it all (it at least used
to be relatively straightforward to do this) the sheer terror of doing
that is, well, terrifying because there's a lot of stuff in there that
matters.
Using the system-provided mail tool was a stupid decision, and one I
managed to avoid with the browser &c, but it's too late now.
(Now this is an off-topic discussion from a discussion about
off-topicness.)
Links:
------
[1] mailto:lm@mcvoy.com
[-- Attachment #2: Type: text/html, Size: 2215 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 16:07 ` Tim Bradshaw
2018-06-22 16:36 ` Steve Johnson
@ 2018-06-22 20:55 ` Bakul Shah
1 sibling, 0 replies; 133+ messages in thread
From: Bakul Shah @ 2018-06-22 20:55 UTC (permalink / raw)
To: Tim Bradshaw; +Cc: The Eunuchs Hysterical Society
On Jun 22, 2018, at 9:07 AM, Tim Bradshaw <tfb@tfeb.org> wrote:
>
> On 22 Jun 2018, at 15:54, Larry McVoy <lm@mcvoy.com> wrote:
>>
>> Try mutt, it's what I use and it threads topics just fine.
>
> The trouble is I have > 10 years of mail sitting in the OSX mail system and although I could probably export it all (it at least used to be relatively straightforward to do this) the sheer terror of doing that is, well, terrifying because there's a lot of stuff in there that matters.
At least on my Mac messages seem to be stored one per file.
Attachments are stored separately. Doesn't seem hard to figure out.
You can periodically rsync to a different place and experiment.
> Using the system-provided mail tool was a stupid decision, and one I managed to avoid with the browser &c, but it's too late now.
There is not much available that is decent. I too use Apple Mail
but also have a separate MH store that goes way back. MH is great
for bulk operations but not for viewing MIME infested mail or
simple things like attaching documents. What I really want is a
combined MUA.
> (Now this is an off-topic discussion from a discussion about off-topicness.)
meta-meta-meta!
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:28 ` Larry McVoy
2018-06-22 14:46 ` Tim Bradshaw
@ 2018-06-22 14:52 ` Ralph Corderoy
2018-06-22 15:13 ` SPC
2018-06-22 16:45 ` Larry McVoy
2018-06-22 15:28 ` Clem Cole
` (2 subsequent siblings)
4 siblings, 2 replies; 133+ messages in thread
From: Ralph Corderoy @ 2018-06-22 14:52 UTC (permalink / raw)
To: tuhs
Larry wrote:
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.
+1
> I think this list has self selected for adults who are reasonable.
> So we mostly are fine.
And when it's trying to recreate the Usenet glory days of `Is this the
longest thread ever?', Warren steps in to admonish.
Cheers, Ralph.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:52 ` Ralph Corderoy
@ 2018-06-22 15:13 ` SPC
2018-06-22 16:45 ` Larry McVoy
1 sibling, 0 replies; 133+ messages in thread
From: SPC @ 2018-06-22 15:13 UTC (permalink / raw)
To: ralph; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 802 bytes --]
On Fri., 22 Jun 2018 at 16:59, Ralph Corderoy (<ralph@inputplus.co.uk>)
wrote:
> Larry wrote:
> > For the record, I'm fine with old stuff getting discussed on TUHS.
> > Even not Unix stuff.
>
Me too. But...
Just in case, try something like 'old-iron', 'tuhs-old-iron-chat',
'tuhs-alt-chat'...
Anyway, as more-or-less community manager on modern social networks, I
think it won't be easy to separate the topics of both lists.
Gracias | Regards - Saludos | Greetings | Freundliche Grüße | Salutations
--
*Sergio Pedraja*
--
http://www.linkedin.com/in/sergiopedraja
-----
Don't believe everything you see, nor believe that you are seeing everything
-----
"The state of a Backup is unknown
until you try to restore it" (- nixCraft)
-----
[-- Attachment #2: Type: text/html, Size: 4882 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:52 ` Ralph Corderoy
2018-06-22 15:13 ` SPC
@ 2018-06-22 16:45 ` Larry McVoy
1 sibling, 0 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-22 16:45 UTC (permalink / raw)
To: Ralph Corderoy; +Cc: tuhs
On Fri, Jun 22, 2018 at 03:52:14PM +0100, Ralph Corderoy wrote:
> And when it's trying to recreate the Usenet glory days of `Is this the
> longest thread ever?', Warren steps in to admonish.
I've been trying (not always succeeding) to not help things get to that
point. Warren is awesome but we all should have a goal of not "needing"
him to step in.
--lm
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:28 ` Larry McVoy
2018-06-22 14:46 ` Tim Bradshaw
2018-06-22 14:52 ` Ralph Corderoy
@ 2018-06-22 15:28 ` Clem Cole
2018-06-22 17:17 ` Grant Taylor via TUHS
2018-06-22 18:00 ` Dan Cross
4 siblings, 0 replies; 133+ messages in thread
From: Clem Cole @ 2018-06-22 15:28 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 820 bytes --]
On Fri, Jun 22, 2018 at 10:28 AM, Larry McVoy <lm@mcvoy.com> wrote:
> I think this list has self selected for adults
...right ... problem is if someone never grew up how would they know ....
Seriously, I think you pretty much nailed it. It is about being
respectful of everyone on the list, particularly those who were there and
lived the history; it allows us all to learn from some fun memories and
broaden our understanding beyond what we thought we knew as complete
'fact'. I've enjoyed filling in tidbits I've collected along my strange
journey, as well as having others clue me in on some details I did not
know. As Larry said, it really is an interesting community, and I think
this list has helped to set the historical record straight in a couple of
places that I am aware of.
[-- Attachment #2: Type: text/html, Size: 1808 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] off-topic list
2018-06-22 14:28 ` Larry McVoy
` (2 preceding siblings ...)
2018-06-22 15:28 ` Clem Cole
@ 2018-06-22 17:17 ` Grant Taylor via TUHS
2018-06-22 18:00 ` Dan Cross
4 siblings, 0 replies; 133+ messages in thread
From: Grant Taylor via TUHS @ 2018-06-22 17:17 UTC (permalink / raw)
To: tuhs
[-- Attachment #1: Type: text/plain, Size: 1424 bytes --]
On 06/22/2018 08:28 AM, Larry McVoy wrote:
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.
I agree.
It seems like a number of people are okay with non-TUHS specific topis
being discussed on TUHS. I'm sure there are others that want non-TUHS
specific topics to stay off TUHS. I feel like each group is entitled to
their opinions.
So, perhaps we could leverage technology to satisfy both groups.
I believe that Warren hosts TUHS using Mailman. I'm fairly certain that
Mailman supports topics / keywords. Thus we could possibly configure
Mailman with such topics and allow subscribers to select which
topics they care about and, just as importantly, whether they want to receive
messages that don't match any defined topic.
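To make that concrete, here is a minimal sketch of what such a setup might
look like with Mailman 2.1's topic filter, written as a config_list input
file. The attribute names are as I remember them from Mailman 2.1, and the
topic names and regexes below are purely illustrative, not a proposal:

    # Sketch of a Mailman 2.1 topic-filter configuration (attribute names from
    # memory; topic names and regexes are only examples).
    # Loaded with something like:  bin/config_list -i topics.py tuhs
    topics_enabled = 1          # turn the topic filter on
    topics_bodylines_limit = 5  # body lines scanned for Subject:/Keywords: hits
    topics = [
        ('unix-history', r'(?i)\bunix\b|\bpdp-?11\b|\bv[1-7]\b',
         'Core TUHS material', False),
        ('old-iron', r'(?i)colossus|eniac|whirlwind|core memory',
         'Non-Unix old hardware', False),
    ]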
> We wandered into Linux/ext2 with Ted, that was fun. We've wandered
> into the VAX history (UWisc got an 8600 and called it "speedy" - what a
> mistake) and I learned stuff I didn't know, so that was fun (and sounded
> just like history at every company I've worked for, sadly :)
I routinely find things being discussed that I didn't know I was
interested in, but learned that I was while reading. I almost always
learn something from every (major) thread.
> I think this list has self selected for adults who are reasonable.
> So we mostly are fine.
:-)
--
Grant. . . .
unix || die
* Re: [TUHS] off-topic list
2018-06-22 14:28 ` Larry McVoy
` (3 preceding siblings ...)
2018-06-22 17:17 ` Grant Taylor via TUHS
@ 2018-06-22 18:00 ` Dan Cross
4 siblings, 0 replies; 133+ messages in thread
From: Dan Cross @ 2018-06-22 18:00 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
On Fri, Jun 22, 2018 at 10:29 AM Larry McVoy <lm@mcvoy.com> wrote:
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.
With the caveat that I must acknowledge that I've been guilty of wandering
off topic more often than I should, it occurs to me that when discussing
the history of something, very often the histories of other things
necessarily feed into that history and become intertwined and inseparable.
There was a lot of "stuff" happening in the computer industry at the time
Unix was created, in it's early years, and on in its (continuing)
evolution, and that "stuff" surely impacted Unix in one way or another. If
we really want to understand where Unix came from and why it is, we must
open ourselves to understanding those influences as well.
That said, of course, there's a balance. Having a place one could point to
and respectfully say, "Hey, this has gone on for 50+ messages; could you
move it over to off-tuhs?" might be useful for folks who want to deep-dive
on something.
> We wandered into Linux/ext2 with Ted, that was fun.
Indeed. I'll go further and confess that I use TUHS as a learning resource
that influences my work professionally. There's a lot of good information
that comes across this list that gets filed away in my brain and manifests
itself in surprising ways by informing my work. I selfishly want that to
continue.
- Dan C.
* Re: [TUHS] off-topic list
2018-06-22 4:18 ` Dave Horsfall
2018-06-22 11:44 ` Arthur Krewat
2018-06-22 14:28 ` Larry McVoy
@ 2018-06-22 17:29 ` Cág
2 siblings, 0 replies; 133+ messages in thread
From: Cág @ 2018-06-22 17:29 UTC (permalink / raw)
To: tuhs
Dave Horsfall wrote:
> COFF - Computer Old Fart Followers.
Is it "Old farts who follow computers" or "Followers of computer old
farts"? Or "old farts who follow other old farts"? :)
--
caóc
* Re: [TUHS] core
@ 2018-06-21 17:40 Noel Chiappa
0 siblings, 0 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-21 17:40 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Tim Bradshaw
> There is a paper, by Flowers, in the Annals of the History of Computing
> which discusses a lot of this. I am not sure if it's available online
Yes:
http://www.ivorcatt.com/47c.htm
(It's also available behind the IEEE pay-wall.) That issue of the Annals (5/3)
has a couple of articles about Colossus; the Coombs one on the design is
available here:
http://www.ivorcatt.com/47d.htm
but doesn't have much on the reliability issues; the Chandler one on
maintenance has more, but alas is only available behind the paywall:
https://www.computer.org/csdl/mags/an/1983/03/man1983030260-abs.html
Brian Randell did a couple of Colossus articles (in "History of Computing in
the 20th Century" and "Origin of Digital Computers") for which he interviewed
Flowers and others; there may be details there too.
Noel
* Re: [TUHS] core
@ 2018-06-21 17:07 Nelson H. F. Beebe
0 siblings, 0 replies; 133+ messages in thread
From: Nelson H. F. Beebe @ 2018-06-21 17:07 UTC (permalink / raw)
To: tuhs
Tim Bradshaw <tfb@tfeb.org> commented on a paper by Tommy Flowers on
the design of the Colossus: here is the reference:
Thomas H. Flowers, The Design of Colossus, Annals of the
History of Computing 5(3) 239--253 July/August 1983
https://doi.org/10.1109/MAHC.1983.10079
Notice that it appeared in the Annals..., not the successor journal
IEEE Annals....
There is a one-column obituary of Tommy Flowers at
http://doi.ieeecomputersociety.org/10.1109/MC.1998.10137
Last night, I finished reading this recent book:
Thomas Haigh and Mark (Peter Mark) Priestley and Crispin Rope
ENIAC in action: making and remaking the modern computer
MIT Press 2016
ISBN 0-262-03398-4
https://doi.org/10.7551/mitpress/9780262033985.001.0001
It has extensive commentary about the ENIAC at the Moore School of
Electrical Engineering at the University of Pennsylvania in Philadelphia, PA. Its
construction began in 1943, with a major instruction set redesign in
1948, and it was shut down permanently on 2 October 1955 at 23:45.
The book notes that poor reliability of vacuum tubes and thousands of
soldered connections was a huge problem, and in the early years, only
about 1 hour out of 24 was devoted to useful runs; the rest of the
time was used for debugging, problem setup (which required wiring
plugboards), testing, and troubleshooting. Even so, runs generally
had to be repeated to verify that the same answers could be obtained:
often, they differed.
The book also reports that reliability was helped by never turning off
power: tubes were more susceptible to failure when power was restored.
The book reports that reliability of the ENIAC improved significantly
when on 28 April 1948, Nick Metropolis (co-inventor of the famous
Monte Carlo method) had the clock rate reduced from 100 kHz to 60 kHz.
It was only several years later that, with vacuum tube manufacturing
improvements, the clock rate was eventually moved back to 100 kHz.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe@math.utah.edu -
- 155 S 1400 E RM 233 beebe@acm.org beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
* Re: [TUHS] core
@ 2018-06-21 13:46 Doug McIlroy
2018-06-21 16:13 ` Tim Bradshaw
0 siblings, 1 reply; 133+ messages in thread
From: Doug McIlroy @ 2018-06-21 13:46 UTC (permalink / raw)
To: tuhs
Tim Bradshaw wrote:
"Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect in. So by the time of Whirlwind it was presumably well-understood that this was possible."
"Colossus: The Secrest of Bletchley Park's Codebreaking Computers"
by Copeland et al has little to say about reliability. But one
ex-operator remarks, "Often the machine broke down."
Whether it was the (significant) mechanical part or the electronics
that typically broke is unclear. Failures in a machine that's always
doing the same thing are easier to detect quickly than failures in
a machine that has a varied load. Also the task at hand could fail
for many other reasons (e.g. mistranscribed messages) so there was
no presumption of correctness of results--that was determined by
reading the decrypted messages. So I think it's a stretch to
argue that reliability was known to be a manageable issue.
Doug
* Re: [TUHS] core
2018-06-21 13:46 Doug McIlroy
@ 2018-06-21 16:13 ` Tim Bradshaw
0 siblings, 0 replies; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-21 16:13 UTC (permalink / raw)
To: Doug McIlroy; +Cc: tuhs
On 21 Jun 2018, at 14:46, Doug McIlroy <doug@cs.dartmouth.edu> wrote:
>
> Whether it was the (significant) mechanical part or the electronics
> that typically broke is unclear. Failures in a machine that's always
> doing the same thing are easier to detect quickly than failures in
> a machine that has a varied load. Also the task at hand could fail
> for many other reasons (e.g. mistranscribed messages) so there was
> no presumption of correctness of results--that was determined by
> reading the decrypted messages. So I think it's a stretch to
> argue that reliability was known to be a manageable issue.
I think it's reasonably well-documented that Flowers understood things about making valves (tubes) reliable that were not previously known: Colossus was well over ten times larger than any previous valve-based system at the time and there was huge scepticism about making it work, to the extent that he funded the initial one substantially himself as BP wouldn't. For instance he understood that you should never turn the things off: even quite significant maintenance was done on Colossi with them on (I believe they were kept on even when the room flooded on occasion, which led to fairly exciting electrical conditions). He also did things with heaters (the heater voltage was ramped up & down to avoid thermal stresses when powering things on or off) and other tricks such as soldering the valves into their bases to avoid connector problems.
The mechanical part (the tape reader) was in fact one significant reason for Colossus: the previous thing, Heath Robinson, had used two tapes which needed to be kept in sync (or, rather, needed to be allowed to drift out of sync in a controlled way with respect to each other), and this did not work well at all as the tapes would stretch & break. Colossus generated one of the tapes (the one corresponding to the Lorenz machine's settings) electronically and would then sync itself to the tape with the message text on it.
It's also not the case that the Colossi did only one thing: they were not general-purpose machines but they were more general-purpose than they needed to be and people found all sorts of ways of getting them to compute properties they had not originally been intended to compute. I remember talking to Donald Michie about this (he was one of the members of the Newmanry which was the bit of BP where the Colossi were).
There is a paper, by Flowers, in the Annals of the History of Computing which discusses a lot of this. I am not sure if it's available online (or where my copy is).
Further, of course, you can just ask people who run one about reliability: there's a reconstructed one at TNMOC which is well worth seeing (there's a Heath Robinson as well), and the people there are very willing to talk about reliability -- I learnt about the heater-voltage-ramping thing by asking what the huge rheostat was for, and later seeing it turned on in the morning -- one of the problems with the reconstruction is that they are required to turn it off at night (this is the same problem that afflicts people who look after steam locomotives, which in real life would almost never have been allowed to get cold).
As I said, I don't want to detract from Whirlwind in any way, but it is the case that Tommy Flowers did sort out significant aspects of the reliability of relatively large valve systems.
--tim
* Re: [TUHS] core
@ 2018-06-20 20:11 Doug McIlroy
2018-06-20 23:53 ` Tim Bradshaw
0 siblings, 1 reply; 133+ messages in thread
From: Doug McIlroy @ 2018-06-20 20:11 UTC (permalink / raw)
To: tuhs
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.
> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.
It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:
> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.
If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
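To put rough numbers on that extrapolation (the radio failure rate below is
only an illustrative guess, not a figure from anywhere in particular):

    # Back-of-the-envelope tube-reliability scaling; all numbers illustrative.
    HOURS_PER_YEAR = 8760.0
    radio_tubes = 5
    radio_failures_per_year = 1.0    # assume the radio needs one new tube a year
    per_tube_rate = radio_failures_per_year / radio_tubes   # failures per tube-year

    for n_tubes in (5_000, 50_000):
        mtbf_hours = HOURS_PER_YEAR / (per_tube_rate * n_tubes)
        print(f"{n_tubes:6d} tubes -> a failure roughly every {mtbf_hours:.1f} hours")
    # 5000 tubes  -> about one failure every 8.8 hours
    # 50000 tubes -> about one failure every 0.9 hours

On that naive reckoning the machine is down more often than a long run takes,
which is exactly the crap shoot described above.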
* Re: [TUHS] core
2018-06-20 20:11 Doug McIlroy
@ 2018-06-20 23:53 ` Tim Bradshaw
0 siblings, 0 replies; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-20 23:53 UTC (permalink / raw)
To: Doug McIlroy; +Cc: tuhs
Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect. So by the time of Whirlwind it was presumably well-understood that this was possible.
(This is not meant to detract from what Whirlwind did in any way.)
> On 20 Jun 2018, at 21:11, Doug McIlroy <doug@cs.dartmouth.edu> wrote:
>
> If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
> radio to a 5000-tube computer (say nothing of the 50,000-tube machines
> for which Whirlwind served as a prototype), computing looks like a
> crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
> computed reliably--a sine qua non for the nascent industry.
* Re: [TUHS] core
@ 2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
2018-06-21 14:09 ` Paul Winalski
0 siblings, 2 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-19 12:23 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Doug McIlroy <doug@cs.dartmouth.edu>
> Core memory wiped out competing technologies (Williams tube, mercury
> delay line, etc) almost instantly and ruled for over twenty years.
I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.
In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!
There's simply no question that computers would not have
advanced (in use, societal importance, technical depth, etc) at the speed they
did without core. It was one of the most consequential steps in the
development of computers into what they are today: up there with transistors,
ICs, DRAM and microprocessors.
> Yet late in his life Forrester told me that the Whirlwind-connected
> invention he was most proud of was marginal testing
Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.
In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.
Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?
Noel
* Re: [TUHS] core
2018-06-19 12:23 Noel Chiappa
@ 2018-06-19 12:58 ` Clem Cole
2018-06-19 13:53 ` John P. Linderman
2018-06-21 14:09 ` Paul Winalski
1 sibling, 1 reply; 133+ messages in thread
From: Clem Cole @ 2018-06-19 12:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> > From: Doug McIlroy <doug@cs.dartmouth.edu>
>
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
>
> Given the above, I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum level as core.
Wow -- I had exactly the same reaction. To me, core was the second
most important invention (semiconductor switching being the first) for
making computing practical. I was thinking that systems must have been
really bad (worse than I knew) from a reliability standpoint if he put
marginal testing up there as more important than core.
Like you, I thought core memory was pretty darned important. I never used
a system that had Williams tubes, although we had one in storage so I knew
what it looked like and knew how much more 'dense' core was compared to
it -- which is still pretty amazing compared to today. For the modern reader,
the IBM 360's 1M core box (of which we had 4) was made up of 4 19" relay
racks, each about 54" high and 24" deep. If you go to
CMU Computer Photos from Chris Hausler
<http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
and scroll down you can see some pictures of the old 360 (including me
in them circa 75/76, in front of it) to gauge the size.
FWIW:
I broke in with MECL, which Motorola invented / developed for IBM for System
360, and it (and TTL) were the first logic families I learned to
design with. I remember the margin pots on the front of the 360 that we used
when we were trying to find weak gates, which happened about once every 10
days.
The interesting part to me is that I suspect the PDP-10s and the Univac
1108 broke as often as the 360 did, but I have fewer memories of chasing
problems with them. Probably because the 'down' time there disrupted
fewer people, so it was less of an issue.
* Re: [TUHS] core
2018-06-19 12:58 ` Clem Cole
@ 2018-06-19 13:53 ` John P. Linderman
0 siblings, 0 replies; 133+ messages in thread
From: John P. Linderman @ 2018-06-19 13:53 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
If I read the wikipedia entry for Whirlwind correctly (not a safe
assumption), it was tube based, and I think there was a tradeoff of speed,
as determined by power, and tube longevity. Given the purpose, early
warning of air attack, speed was vital, but so, too, was keeping it alive.
So a means of finding a "sweet spot" was really a matter of national
security. I can understand Forrester's pride in that context.
On Tue, Jun 19, 2018 at 8:58 AM, Clem Cole <clemc@ccc.com> wrote:
>
>
> On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
> wrote:
>
>> > From: Doug McIlroy <doug@cs.dartmouth.edu>
>>
>> > Yet late in his life Forrester told me that the Whirlwind-connected
>> > invention he was most proud of was marginal testing
>>
>> Given the above, I'm totally gobsmacked to hear that. Margin testing was
>> important, yes, but not even remotely on the same quantum level as core.
>
> Wow -- I had exactly the same reaction. To me, core was the second
> most important invention (semiconductor switching being the first) for
> making computing practical. I was thinking that systems must have been
> really bad (worse than I knew) from a reliability standpoint if he put
> marginal testing up there as more important than core.
>
> Like you, I thought core memory was pretty darned important. I never used
> a system that had Williams tubes, although we had one in storage so I knew
> what it looked like and knew how much more 'dense' core was compared to
> it -- which is still pretty amazing compared to today. For the modern reader,
> the IBM 360's 1M core box (of which we had 4) was made up of 4 19" relay
> racks, each about 54" high and 24" deep. If you go to
> CMU Computer Photos from Chris Hausler
> <http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
> and scroll down you can see some pictures of the old 360 (including me
> in them circa 75/76, in front of it) to gauge the size.
>
>
>
> FWIW:
> I broke in with MECL, which Motorola invented / developed for IBM for
> System 360, and it (and TTL) were the first logic families I learned to
> design with. I remember the margin pots on the front of the 360 that
> we used when we were trying to find weak gates, which happened about once
> every 10 days.
>
> The interesting part to me is that I suspect the PDP-10s and the Univac
> 1108 broke as often as the 360 did, but I have fewer memories of chasing
> problems with them. Probably because the 'down' time there disrupted
> fewer people, so it was less of an issue.
* Re: [TUHS] core
2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
@ 2018-06-21 14:09 ` Paul Winalski
1 sibling, 0 replies; 133+ messages in thread
From: Paul Winalski @ 2018-06-21 14:09 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On 6/19/18, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> In other words, there's a reason they were all dropped like hot potatoes in
> favour of core - which, looked at from our DRAM-era perspective, seems
> quaintly dinosaurian. Individual pieces of hardware you can actually _see_
> with the naked eye, for _each_ bit? But that should give some idea of how
> much
> worse everything before it was, that it killed them all off so quickly!
When we replaced our S/360 model 25 with a S/370 model 125, I remember
being shocked when I found out that the semiconductor memory on the
125 was volatile. You mean we're going to have to reload the OS every
time we power the machine off? Yuck!!
-Paul W.
* Re: [TUHS] core
@ 2018-06-19 11:50 Doug McIlroy
0 siblings, 0 replies; 133+ messages in thread
From: Doug McIlroy @ 2018-06-19 11:50 UTC (permalink / raw)
To: tuhs
> As best I now recall, the concept was that instead of the namespace having a
> root at the top, from which you had to allocate downward (and then recurse),
> it built _upward_ - if two previously un-connected chunks of graph wanted to
> unite in a single system, they allocated a new naming layer on top, in which
> each existing system appeared as a constituent.
The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDs had to agree across the union (as in NFS) or were mapped (as
in RFS).
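As I recall the Newcastle Connection's convention, a remote file was named by
going "up" through that new super-root, e.g. /../unix2/usr/doc/foo. A toy
resolver for that convention (the path syntax is from memory, everything else
is purely illustrative) might look like:

    # Split a Unix United / Newcastle Connection style path into (host, path).
    def split_unix_united(path):
        # "/../unix2/usr/doc/foo" -> ("unix2", "/usr/doc/foo"); others stay local.
        if path.startswith("/../"):
            host, _, rest = path[4:].partition("/")
            return host, "/" + rest
        return None, path            # local file, no remote hop

    print(split_unix_united("/../unix2/usr/doc/foo"))   # ('unix2', '/usr/doc/foo')
    print(split_unix_united("/usr/doc/foo"))            # (None, '/usr/doc/foo')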
Doug
[parent not found: <20180618121738.98BD118C087@mercury.lcs.mit.edu>]
* Re: [TUHS] core
[not found] <20180618121738.98BD118C087@mercury.lcs.mit.edu>
@ 2018-06-18 21:13 ` Johnny Billquist
0 siblings, 0 replies; 133+ messages in thread
From: Johnny Billquist @ 2018-06-18 21:13 UTC (permalink / raw)
To: Noel Chiappa, ron, TUHS main list
On 2018-06-18 14:17, Noel Chiappa wrote:
> > The "separate" bus for the semiconductor memory is just a second Unibus
>
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.
Ah. You and Ron are right. I am confused.
So there were some previous PDP-11 models that did not have their memory
on the Unibus. The 11/45,50,55 accessed memory from the CPU not through
the Unibus, but through the fastbus, which was a pure memory bus, as far
as I understand. You (obviously) could also have memory on the Unibus,
but that would then be slower.
Ah, and there is a jumper to tell which addresses are served by the
fastbus, and the rest then go to the Unibus. Thanks, I had missed these
details before. (To be honest, I have never actually worked on any of
those machines.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
* Re: [TUHS] core
@ 2018-06-18 17:56 Noel Chiappa
2018-06-18 18:51 ` Clem Cole
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-18 17:56 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Clem Cole
> My experience is that more often than not, it's less a failure to see
> what a successful future might bring, and often one of well '*we don't
> need to do that now/costs too much/we don't have the time*.'
Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.
By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.
In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.
> I can imagine that he did not want to spend on anything that he thought
> was wasteful.
Understandable. But see above...
The art is in finding a path that leave the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.
A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway.. lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates an I/O device register is being looked
at. Without that, Q18 devices would have either i) had to incur the cost now
of more bus address line transceivers, or ii) stopped working when the bus was
upgraded to 22 address lines.
They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).
> Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
> thinking Brooks was nuts
Don't think I've heard that one?
>> the decision to remove the variable-length addresses from IPv3 and
>> substitute the 32-bit addresses of IPv4.
> I always wondered about the back story on that one.
My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.
(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)
> 32-bits seemed infinite in those days and no body expected the network
> to scale to the size it is today and will grow to in the future
Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
>> So, is poor vision common? All too common.
> But to be fair, you can also end up with being like DEC and often late
> to the market.
Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...
This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.
So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.
> I think in both cases Alpha would have been better
> accepted if DEC had shipped earlier with a few hacks, but then improved
> Tru64 as a better version was developed (*i.e.* replaced the memory
> system, the I/O system, the TTY handler, the FS just to name a few that
> got rewritten from OSF/1 because folks thought they were 'weak').
But you can lose with that strategy too.
Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - a common outcome when
you have the experience/knowledge of the first pass.
Unfortunately, by that time it had the reputation as 'horribly slow and
inefficient', and in a lot of ways, never kicked that:
http://www.multicians.org/myths.html
Sigh, sometimes you can't win!
Noel
* Re: [TUHS] core
2018-06-18 17:56 Noel Chiappa
@ 2018-06-18 18:51 ` Clem Cole
0 siblings, 0 replies; 133+ messages in thread
From: Clem Cole @ 2018-06-18 18:51 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
On Mon, Jun 18, 2018 at 1:56 PM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
>
> > Just like I retold the Amdahl/Brooks story of the 8-bit byte and
> Amdahl
> > thinking Brooks was nuts
>
> Don't think I've heard that one?
Apologies for the repeat if I have sent this to TUHS before (I know I have
mentioned it in other places). Somebody else on this list mentioned David
Brailsford's YouTube video, which has more details, but he had some of it not
quite right. I had watched it and we were communicating. The actual story
behind the byte is a bit deeper than he describes in the lecture, for which he
thanked me - and I have since introduced him to my old friend & colleague
Russ Robelen, who was the chief designer of the Model 50 and later led
the ASC system with John Cocke working for him. Brailsford is right about
the results of byte addressing, and I do think his lecture is excellent and you
can learn a great deal from it. That said, Russ tells the details of the story like
this:
Gene Amdahl wanted the byte to be 6 bits and felt that 8 bits was
'wasteful' of his hardware. Amdahl also did not see why more than 24 bits
for a word was really needed, and most computations used words, of course.
Four 6-bit bytes in a word seemed satisfactory to Gene. Fred Brooks kept
kicking Amdahl out of his office and told him flatly that until he came
back with things that were powers of 2, don't bother - we couldn't program
it. The 32-bit word was a compromise, but note the original address was
only 24 bits (3 8-bit bytes), although Brooks made sure all address
functions stored all 32 bits - which, as Gordon Bell pointed out later, was
the #1 thing that saved System 360 and made it last.
BTW:
Russ and Ed Sussenguth invented speculative execution for the ACS system
a couple of years later. I was harassing him last January,
because for 40 years we have been using his cool idea and it
came back to bite us at Intel. Here is the message cut/pasted from his
email for context:
"I see you are still leading a team at Intel
developing super computers and associated technologies. It certainly
is exciting times in high speed computing.
It brings back memories of my last work at IBM 50 years ago on the ACS
project. You know you are old when you publish in the IEEE Annals of
the History of Computing. One of the co-authors, Ed Sussenguth, passed
away before our paper was published in 2016.
https://www.computer.org/csdl/mags/an/2016/01/man2016010060.html
Some of the work we did way back then has made the news in an unusual way
with the recent revelations on Spectre and Meltdown. I read the
‘Spectre Attacks: Exploiting Speculative Execution’ paper yesterday
trying to understand how speculative execution was being exploited.
At ACS we were the first group at IBM to come up with the notion of the
Branch Table and other techniques for speeding up execution.
I wish you were closer. I’d love to hear your views on the state of
computing today. I have a framed micrograph of the IBM Power 8 chip on
the wall in my office. In many ways the Power Series is an outgrowth
of ACS.
I still try to keep up with what is happening in my old field. The
recent advances by Google in Deep Learning are breathtaking to me.
Advances like AlphaGo Zero I never expected to see in my lifetime."
>
> But you can lose with that strategy too.
>
> Multics had a lot of sub-systems re-written from the ground up over time, and
> the new ones were always better (faster, more efficient) - a common outcome
> when you have the experience/knowledge of the first pass.
>
> Unfortunately, by that time it had the reputation as 'horribly slow and
> inefficient', and in a lot of ways, never kicked that:
>
> http://www.multicians.org/myths.html
>
> Sigh, sometimes you can't win!
Yep - although I think that may have been a case of economics. Multics,
for all its great ideas, was just a huge system, arriving as Moore's law started to
make smaller systems possible. So I think your comment about thinking
about what you need now and what you will need in the future was part of
the issue.
I look at Multics vs Unix the same way I look at TNC vs Beowulf clusters.
At the time we did TNC, we worked really hard to nail the transparency
thing, and we did. It was (is) awesome. But it cost. Tru64 (VMS and
other SSI) style clusters are not around today. The hack that is Beowulf
is what lived on. The key is that it was good enough, and for most people
the extra work we did to get rid of those seams just was not worth it.
And because Beowulf was economically successful, things were
implemented for it that were never even considered for Tru64 and the SSI
style systems. To me, Multics and Unix have the same history.
Multics was (is) cool; Unix is similar but different, and took the
lead. The key is that it was not a straight path. Once Unix took over,
history went in a different direction.
Clem
* Re: [TUHS] core
@ 2018-06-18 14:51 Noel Chiappa
2018-06-18 14:58 ` Warner Losh
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-18 14:51 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Tony Finch <dot@dotat.at>
> Was this written down anywhere?
Alas, no. It was a presentation at a group seminar, and used either hand-drawn
transparencies, or a white-board - don't recall exactly which. I later tried to
dig it up for use in Nimrod, but without success.
As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
Or something like that! :-)
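One way to picture it (a toy sketch of the idea as described above, with made-up
names, not anything Dave actually specified):

    # Bottom-up naming: uniting existing namespaces manufactures a new top
    # layer in which each one appears as a constituent; no global root needed.
    def unite(constituents):
        return dict(constituents)        # the new layer *is* the new root

    mit  = {"hosts": ["cs", "ai"]}
    parc = {"hosts": ["maxc", "alto"]}
    net  = unite({"mit": mit, "parc": parc})
    # Two such unions can themselves later be united under yet another layer:
    bigger = unite({"arpa": net, "uk": {"hosts": ["ucl"]}})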
The issue with 'top-down' is that you have to have some global 'authority' to
manage the top level - hand out chunks, etc, etc. (For a spectacular example
of where this can go, look at NSAP's.) And what do you do when you run out of
top-level space? (Although in the NSAP case, they had such a complex top
couple of layers, they probably would have avoided that issue. Instead, they
had the problem that their name-space was spectacularly ill-suited to path
selection [routing], since in very large networks, interface names
[addresses] must have a topological aspect if the path selection is to
scale. Although looking at the Internet nowadays, perhaps not!)
'Bottom-up' is not without problems of course (e.g. what if you want to add
another layer, e.g. to support potentially-nested virtual machines).
I'm not sure how well Dave understood the issue of path selection scaling at
the time he proposed it - it was very early on, '78 or so - since we didn't
understand path selection then as well as we do now. IIRC, I think he was
mostly interested in it as a way to avoid having to have an assignment
authority. The attraction for me was that it was easier to ensure that the
names had the needed topological aspect.
Noel
* Re: [TUHS] core
2018-06-18 14:51 Noel Chiappa
@ 2018-06-18 14:58 ` Warner Losh
0 siblings, 0 replies; 133+ messages in thread
From: Warner Losh @ 2018-06-18 14:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
On Mon, Jun 18, 2018 at 8:51 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> I'm not sure how well Dave understood the issue of path selection scaling
> at
> the time he proposed it - it was very early on, '78 or so - since we didn't
> understand path selection then as well as we do now. IIRC, I think he was
> mostly interested in it as a way to avoid having to have an assignment
> authority. The attraction for me was that it was easier to ensure that the
> names had the needed topological aspect.
I always thought the domain names were backwards, but that's because I
wanted command completion to work on them. I have not reevaluated this
view, though, since the very early days of the internet, and I suppose
completion would be less useful than I'd originally supposed because the
cost of getting the list is high or administratively blocked (tell me all the
domains that start with, in my notation, "com.home" when I hit <tab>, in a
fast, easy way).
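A trivial sketch of that notation, just to make it concrete (nothing more
than reversing the labels; the domain names are arbitrary examples):

    # "Backwards" domain notation: most-significant label first, which is
    # what would make prefix completion natural.
    def reverse_domain(name):
        return ".".join(reversed(name.split(".")))

    print(reverse_domain("minnie.tuhs.org"))    # org.tuhs.minnie
    print(reverse_domain("home.example.com"))   # com.example.home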
Warner
[parent not found: <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>]
* Re: [TUHS] core
[not found] <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
@ 2018-06-18 5:58 ` Johnny Billquist
2018-06-18 12:39 ` Ronald Natalie
0 siblings, 1 reply; 133+ messages in thread
From: Johnny Billquist @ 2018-06-18 5:58 UTC (permalink / raw)
To: tuhs
On 2018-06-18 04:00, Ronald Natalie <ron@ronnatalie.com> wrote:
>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
Eh... Yes... But...
The "separate" bus for the semiconductor memory is just a second Unibus,
so the statement is still true. All (earlier) PDP-11 models had their
memory on the Unibus. Including the 11/45,50,55.
It's just that those models have two Unibuses.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
* Re: [TUHS] core
2018-06-18 5:58 ` Johnny Billquist
@ 2018-06-18 12:39 ` Ronald Natalie
0 siblings, 0 replies; 133+ messages in thread
From: Ronald Natalie @ 2018-06-18 12:39 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
> On Jun 18, 2018, at 1:58 AM, Johnny Billquist <bqt@update.uu.se> wrote:
>
> On 2018-06-18 04:00, Ronald Natalie <ron@ronnatalie.com> wrote:
>>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
>> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
>
> Eh... Yes... But...
> The "separate" bus for the semiconductor memory is just a second Unibus, so the statement is still true. All (earlier) PDP-11 models had their memory on the Unibus. Including the 11/45,50,55.
> It's just that those models have two Unibuses.
No, you are confusing different things I think. The fastbus (where the memory was) is a distinct bus. The fastbus was dual ported with the CPU in one port and a second UNIBUS could be connected to the other port.
* Re: [TUHS] core
@ 2018-06-17 21:18 Noel Chiappa
0 siblings, 0 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-17 21:18 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Derek Fawcus <dfawcus+lists-tuhs@employees.org>
> my scan of it suggests that only the host part of the address was
> extensible
Well, the division into 'net' and 'rest' does not appear to have been a hard
one at that point, as it later became.
> The other thing obviously missing in the IEN 28 version is the TTL
Yes... interesting!
> it has the DF flag in the TOS field, and an OP bit in the flags field
Yeah, small stuff like that got added/moved/removed around a lot.
> the CIDR vs A/B/C stuff didn't really change the rest.
It made packet processing in routers quite different; routing lookups, the
routing table, etc became much more complex (I remember that change)! Also in
hosts, which had not yet had their understanding of fields in the addresses
lobotomized away (RFC-1122, Section 3.3.1).
Yes, the impact on code _elsewhere_ in the stack was minimal, because the
overall packet format didn't change, and addresses were still 32 bits, but...
> The other bit I find amusing are the various movements of the port
> numbers
Yeah, there was a lot of discussion about whether they were properly part of
the internetwork layer, or the transport. I'm not sure there's really a 'right'
answer; PUP:
http://gunkies.org/wiki/PARC_Universal_Packet
made them part of the internetwork header, and seemed to do OK.
I think we eventually decided that we didn't want to mandate a particular port
name size across all transports, and moved it out. This had the down-side that
there are some times when you _do_ want to have the port available to an
IP-only device, which is why ICMP messages return the first N bytes of the
data _after_ the IP header (since it's not clear where the port field[s] will
be).
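A sketch of why that quota works for an IP-only device: for both TCP and UDP
the two 16-bit ports happen to be the first four bytes of the transport
header, so the classic 8 quoted bytes (RFC 792) are enough to recover them
(the sample bytes below are made up):

    import struct

    # ICMP errors quote the offending IP header plus the first 8 bytes of its
    # payload; for TCP and UDP those bytes start with the source and dest ports.
    def ports_from_quoted_data(first8):
        return struct.unpack("!HH", first8[:4])    # (source port, dest port)

    print(ports_from_quoted_data(bytes.fromhex("001700c800000000")))  # (23, 200)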
But I found, working with PUP, there were some times when the defined ports
didn't always make sense with some protocols (although PUP didn't really have
a 'protocol' field per se); the interaction of 'PUP type' and 'socket' could
sometimes be confusing/problemtic. So I personally felt that was at least as
good a reason to move them out. 'Ports' make no sense for routing protocols,
etc.
Overall, I think in the end, TCP/IP got that all right - the semantics of the
'protocol' field are clear and simple, and ports in the transport layer have
worked well; I can't think of any places (other than routers which want to
play games with connections) where not having ports in the internetwork layer
has been an issue.
Noel
* Re: [TUHS] core
@ 2018-06-17 17:58 Noel Chiappa
2018-06-18 9:16 ` Tim Bradshaw
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-17 17:58 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: "Theodore Y. Ts'o"
> To be fair, it's really easy to be wise to after the fact.
Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".
> failed protocols and designs that collapsed of their own weight because
> architects added too much "maybe it will be useful in the future"
And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less has to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.
> Sometimes having architects being successful to add their "vision" to a
> product can be worst thing that ever happened
A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').
It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.
> The problem is it's hard to figure out in advance which is poor vision
> versus brilliant engineering to cut down the design so that it is "as
> simple as possible", but nevertheless, "as complex as necessary".
Absolutely. But it can be done. Let's look (as an example) at that IPv3->IPv4
addressing decision.
One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) It was going to be a failure (in
which case, it probably didn't matter what was done), or ii) it was going to be
a success, in which case that 32-bit field was clearly going to be a crippling
problem.
With that in hand, there was no excuse for that decision.
I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.
Noel
* Re: [TUHS] core
2018-06-17 17:58 Noel Chiappa
@ 2018-06-18 9:16 ` Tim Bradshaw
2018-06-18 15:33 ` Tim Bradshaw
0 siblings, 1 reply; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-18 9:16 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On 17 Jun 2018, at 18:58, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> One of two things was going to be true of the 'Internet' (that name didn't
> exist then, but it's a convenient tag): i) It was going to be a failure (in
> which case, it probably didn't matter what was done), or ii) it was going to be
> a success, in which case that 32-bit field was clearly going to be a crippling
> problem.
There's a third possibility: if implementation of the specification is too complicated it will fail, while if the implementation is simple enough it may succeed, in which case the specification which allowed the simple implementation will eventually become a problem.
I don't know whether IP falls into that category, but I think a lot of things do.
--tim
* Re: [TUHS] core
2018-06-18 9:16 ` Tim Bradshaw
@ 2018-06-18 15:33 ` Tim Bradshaw
0 siblings, 0 replies; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-18 15:33 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
> On 18 Jun 2018, at 15:33, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> When you say "if implementation of the specification is too complicated it
> will fail", I think you mean 'the specification is so complex that _any_
> implementation must _necessarily_ be so complex that all the implementations,
> and thus the specification itself, will fail'?
Yes, except that I don't think the specification itself has to be complex; it just needs to require that implementations be hard, computationally expensive, or both. For instance, a specification for integer arithmetic in a programming language which required that (a/b)*b be a (except where b is zero) isn't complicated, I think, but it requires implementations which are either hard, computationally expensive, or both.
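A concrete illustration of that example, using Python's integer and rational
types (any language with truncating or flooring integer division shows the
same thing):

    from fractions import Fraction

    a, b = 7, 2
    print((a // b) * b)              # 6: ordinary integer division loses information
    print((Fraction(a) / b) * b)     # 7: exact arithmetic does satisfy (a/b)*b == a,
                                     #    but drags in arbitrary-precision rationals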
>
> And for "in which case the specification which allowed the simple
> implementation will eventually become a problem", I think that's part of your
> saying 'you get to pick your poison; either the spec is so complicated it
> fails right away, or if it's simple enough to succeed in the short term, it
> will neccessarily fail in the long term'?
Yes, with the above caveat about the spec not needing to be complicated, and I'd say something like 'become limiting in the long term' rather than 'fail in the long term'.
So I think your idea was that, if there's a choice between a do-the-right-thing, future-proof (variable-length-address field) solution or a will-work-for-now, implementationally-simpler (fixed-length) solution, then you should do the right thing, because if the thing fails it makes no odds and if it succeeds then you will regret the second solution. My caveat is that success (in the sense of wide adoption) is not independent of which solution you pick, and in particular that will-work-for-now solutions are much more likely to succeed than do-the-right-thing ones.
As I said, I don't know whether this applies to IP.
--tim
* Re: [TUHS] core
@ 2018-06-17 14:36 Noel Chiappa
2018-06-17 15:58 ` Derek Fawcus
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-17 14:36 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Derek Fawcus
> Are you able to point to any document which still describes that
> variable length scheme? I see that IEN 28 defines a variable length
> scheme (using version 2)
That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).
> and that IEN 41 defines a different variable length scheme, but is
> proposing to use version 4.
Right, that's a draft only (no code ever written for it), from just before the
meeting that substituted 32-bit addresses.
> (IEN 44 looks a lot like the current IPv4).
Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
stuff). :-)
Noel
* Re: [TUHS] core
2018-06-17 14:36 Noel Chiappa
@ 2018-06-17 15:58 ` Derek Fawcus
0 siblings, 0 replies; 133+ messages in thread
From: Derek Fawcus @ 2018-06-17 15:58 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sun, Jun 17, 2018 at 10:36:10AM -0400, Noel Chiappa wrote:
> > From: Derek Fawcus
>
> > Are you able to point to any document which still describes that
> > variable length scheme? I see that IEN 28 defines a variable length
> > scheme (using version 2)
>
> That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
> here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).
Ah - thanks.
So my scan of it suggests that only the host part of the address was
extensible, but then I guess the CIDR scheme could have eventually been applied
to that portion.
The other thing obviously missing in the IEN 28 version is the TTL (which
appeared by IEN 41).
> > and that IEN 41 defines a different variable length scheme, but is
> > proposing to use version 4.
>
> Right, that's a draft only (no code ever written for it), from just before the
> meeting that substituted 32-bit addresses.
>
> > (IEN 44 looks a lot like the current IPv4).
>
> Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
> stuff). :-)
I wrote 'a lot', because it has the DF flag in the TOS field, and an OP bit
in the flags field; the CIDR vs A/B/C stuff didn't really change the rest.
But yeah - essentially what we still use now. Now 40 years and still going.
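For reference, a sketch of that "what we still use now" fixed 20-byte header,
packed with Python's struct module (field values are arbitrary examples, and
the checksum is left as zero rather than computed):

    import socket, struct

    # Minimal IPv4 header (RFC 791 layout), no options: 20 bytes, 32-bit addresses.
    ver_ihl = (4 << 4) | 5                       # version 4, header length 5 * 4 bytes
    header = struct.pack("!BBHHHBBH4s4s",
        ver_ihl, 0,                              # version/IHL, TOS
        20, 0x1234,                              # total length, identification
        0, 64, 6, 0,                             # flags/frag offset, TTL, proto=TCP, checksum
        socket.inet_aton("10.0.0.1"),            # 32-bit source address
        socket.inet_aton("10.0.0.2"))            # 32-bit destination address
    print(len(header))                           # 20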
The other bit I find amusing is the various movements of the port numbers:
obviously they were originally part of the combined header (e.g. IEN 26);
then the IEN 21 TCP has them in the middle of its header; by IEN 44 they are
sort of their own header, split from the TCP header, which now omits ports.
Eventually they end up in the current location, as part of the start of the
TCP header (and UDP, etc.), essentially combining that 'Port Header' with
whatever transport follows.
DF
* Re: [TUHS] core
@ 2018-06-16 22:57 Noel Chiappa
0 siblings, 0 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-16 22:57 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Johnny Billquist
> incidentally have 18 data bits, but that is mostly ignored by all
> systems. I believe the KS-10 made use of that, though. And maybe the
> PDP-15.
The 18-bit data thing is a total kludge; they recycled the two bus parity
lines as data lines.
The first device that I know of that used it is the RK11-E:
http://gunkies.org/wiki/RK11_disk_controller#RK11-E
which is the same cards as the RK11-D, with a jumper set for 18-bit operation,
and a different clock crystal. The other UNIBUS interface that could do this
was the RH11 MASSBUS controller. Both were originally done for the PDP-15;
they were used with the UC15 Unichannel.
The KS10:
http://gunkies.org/wiki/KS10
wound up using the 18-bit RH11 hack, but that was many years later.
Noel
[parent not found: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>]
* Re: [TUHS] core
[not found] <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
@ 2018-06-16 22:14 ` Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
0 siblings, 2 replies; 133+ messages in thread
From: Johnny Billquist @ 2018-06-16 22:14 UTC (permalink / raw)
To: tuhs
On 2018-06-16 21:00, Clem Cole <clemc@ccc.com> wrote:
> below... > On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa
<jnc@mercury.lcs.mit.edu> wrote:
>>
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>> have
>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>> but I don't recall exactly where I saw it.)
> I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.
I think the paper you both are referring to is the "What have we learned
from the PDP-11", by Gordon Bell and Bill Strecker in 1977.
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
There is some additional comments in
https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm
> My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
> important.
Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was
very clearly designed by DEC as the system bus for all needs, and was
used just like that until the 11/70, which introduced a separate memory
bus. In all previous PDP-11s, both memory and peripherals were connected
on the Unibus.
Why it only has 18 bits, I don't know. It might have been a reflection
of the fact that most things at DEC were either 12 or 18 bits at the time,
and 12 was obviously not going to cut it. But that is pure speculation
on my part.
But, if you read that paper again (the one from Bell), you'll see that
he was pretty much a source for the Unibus as well, and the whole idea
of having it for both memory and peripherals. But that does not tell us
anything about why it got 18 bits. It also, incidentally, has 18 data
bits, but that is mostly ignored by all systems. I believe the KS-10
made use of that, though. And maybe the PDP-15. And I suspect the same
would be true for the address bits. Neither system was likely involved
when the Unibus was created, but both made fortuitous use of it when
they were designed.
> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years. Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful. Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like: Someone like me saying: *"Henk, 18 bits
> is not going to cut it."* He might have replied something like: *"Bool
> sheet *[a dutchman's way of cursing in English], *we already gave you two
> more bit than you can address* (actually he'd then probably stop mid
> sentence and translate in his head from Dutch to English - which was always
> interesting when you argued with him).
Quite possible. :-)
> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not not alone. Only someone like Gordon and
> the time could have overruled it, and I don't think the problems were
> foreseen as Noel notes.
Bell in retrospect thinks that they should have realized this problem,
but it would appear they really did not consider it at the time. Or
maybe just didn't believe in what they predicted.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 22:14 ` Johnny Billquist
@ 2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
1 sibling, 0 replies; 133+ messages in thread
From: Arthur Krewat @ 2018-06-16 22:38 UTC (permalink / raw)
To: tuhs
On 6/16/2018 6:14 PM, Johnny Billquist wrote:
> I believe the KS-10 made use of that, though.
Absolutely.
The 36-bit transfer mode requires one memory cycle
(rather than two) for each word transferred. In 36-bit
mode, the PDP-11 word count must be even (an odd word count
would hang the UBA). Only one device on each UBA can do
36-bit transfers; this is because the UBA has only one
buffer to hold the left 18 bits, while the right 18 bits
come across the UNIBUS. On the 2020 system, the disk is the
only device using the 36-bit transfer mode.
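Logically, the 36-bit mode just pairs two 18-bit UNIBUS transfers into one
PDP-10 word; a toy Python sketch of that pairing (names and checks are mine,
not DEC's):

    def assemble_36bit_word(left18: int, right18: int) -> int:
        # The UBA's single buffer holds the left 18 bits from one transfer;
        # the right 18 bits arrive on the next, and the pair goes to memory
        # as one 36-bit word in a single memory cycle.
        assert 0 <= left18 < (1 << 18) and 0 <= right18 < (1 << 18)
        return (left18 << 18) | right18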
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 22:14 ` Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
@ 2018-06-17 0:15 ` Clem cole
2018-06-17 1:50 ` Johnny Billquist
1 sibling, 1 reply; 133+ messages in thread
From: Clem cole @ 2018-06-17 0:15 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
Hmm. I think you are trying to put too fine a point on it. Having had this conversation with a number of folks who were there, you're right that the ability to put memory on the Unibus at the lower end was clearly there, but I'll take Dave Cane and Henk's word for it in calling it an I/O bus - they were the primary HW folks behind a lot of it. BTW, its replacement, Dave's BI, was clearly designed with I/O as the primary point. It was supposed to be open sourced in today's parlance, but DEC closed it at the end. The whole reason was to have an I/O bus that third parties could build I/O boards around. Plus, by then DEC had started doing split transactions on the memory bus a la the SMI, which was supposed to be private. After the BI mess, and by the time of Alpha, BI morphed into PCI, which was made open and of course is now Intel's PCIe. But that said, even today we need to make large memory windows available through it for things like messaging and GPUs - so the differences can get really squishy. OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core. And I expect future GPUs to work the same way.
And at this point (on die) the differences between a memory and I/O bus are often driven by power and die size. The memory bus folks are often willing to pay more to keep the memory hierarchy a bit more sane. I/O busses will often let the SW deal with consistency in return for massive scale.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
> On Jun 16, 2018, at 6:14 PM, Johnny Billquist <bqt@update.uu.se> wrote:
>
>> On 2018-06-16 21:00, Clem Cole <clemc@ccc.com> wrote:
>> below... > On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa
> <jnc@mercury.lcs.mit.edu> wrote:
>>>
>>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>>> have
>>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>>> but I don't recall exactly where I saw it.)
>> I think it was part of the same paper where he made the observation that
>> the greatest mistake an architecture can have is too few address bits.
>
> I think the paper you both are referring to is the "What have we learned from the PDP-11", by Gordon Bell and Bill Strecker in 1977.
>
> https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
>
> There is some additional comments in https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm
>
>> My understanding is that the problem was that UNIBUS was perceived as an
>> I/O bus and as I was pointing out, the folks creating it/running the team
>> did not value it, so in the name of 'cost', more bits was not considered
>> important.
>
> Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was very clearly designed a the system bus for all needs by DEC, and was used just like that until the 11/70, which introduced a separate memory bus. In all previous PDP-11s, both memory and peripherals were connected on the Unibus.
>
> Why it only have 18 bits, I don't know. It might have been a reflection back on that most things at DEC was either 12 or 18 bits at the time, and 12 was obviously not going to cut it. But that is pure speculation on my part.
>
> But, if you read that paper again (the one from Bell), you'll see that he was pretty much a source for the Unibus as well, and the whole idea of having it for both memory and peripherals. But that do not tell us anything about why it got 18 bits. It also, incidentally have 18 data bits, but that is mostly ignored by all systems. I believe the KS-10 made use of that, though. And maybe the PDP-15. And I suspect the same would be true for the address bits. But neither system was probably involved when the Unibus was created, but made fortuitous use of it when they were designed.
>
>> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
>> engineering at DEC for many years. Henk was notoriously frugal (we might
>> even say 'cheap'), so I can imagine that he did not want to spend on
>> anything that he thought was wasteful. Just like I retold the
>> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
>> I don't know for sure, but I can see that without someone really arguing
>> with Henk as to why 18 bits was not 'good enough.' I can imagine the
>> conversation going something like: Someone like me saying: *"Henk, 18 bits
>> is not going to cut it."* He might have replied something like: *"Bool
>> sheet *[a dutchman's way of cursing in English], *we already gave you two
>> more bit than you can address* (actually he'd then probably stop mid
>> sentence and translate in his head from Dutch to English - which was always
>> interesting when you argued with him).
>
> Quite possible. :-)
>
>> Note: I'm not blaming Henk, just stating that his thinking was very much
>> that way, and I suspect he was not not alone. Only someone like Gordon and
>> the time could have overruled it, and I don't think the problems were
>> foreseen as Noel notes.
>
> Bell in retrospect thinks that they should have realized this problem, but it would appear they really did not consider it at the time. Or maybe just didn't believe in what they predicted.
>
> Johnny
>
> --
> Johnny Billquist || "I'm on a bus
> || on a psychedelic trip
> email: bqt@softjar.se || Reading murder books
> pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-17 0:15 ` Clem cole
@ 2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
0 siblings, 2 replies; 133+ messages in thread
From: Johnny Billquist @ 2018-06-17 1:50 UTC (permalink / raw)
To: Clem cole; +Cc: tuhs
On 2018-06-17 02:15, Clem cole wrote:
> Hmm. I think you are trying to put to fine a point on it. Having had this conversation with a number of folks who were there, you’re right that the ability to memory on the Unibus at the lower end was clearly there but I’ll take Dave Cane and Henk’s word for it as calling it an IO bus who were the primary HW folks behind a lot of it. Btw it’s replacement, Dave’s BI, was clearly designed with IO as the primary point. It was supposed to be open sourced in today’s parlance but DEC closed it at the end. The whole reason was to have an io bus the 3rd parties could build io boards around. Plus, By then DEC started doing split transactions on the memory bus ala the SMI which was supposed to be private. After the BI mess, And then By the time of Alpha BI morphed into PCI which was made open and of course is now Intel’s PCIe. But that said, even today we need to make large memory windows available thru it for things like messaging and GPUs - so the differences can get really squishy. OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core. And I expect future GPUs to work the same way.
Well, it is an easily observable fact that before the PDP-11/70, all
PDP-11 models had their memory on the Unibus. So it was not only "an
ability at the lower end", but the only option for a number of years.
The need to grow beyond 18 bit addresses, as well as speed demands,
drove the PDP-11/70 to abandon the Unibus for memory. And this was then
also the case for the PDP-11/44, PDP-11/24, PDP-11/84 and PDP-11/94,
which also had 22-bit addressing of memory, which then followed the
PDP-11/70 in having memory separated from the Unibus. All other Unibus
machines had the memory on the Unibus.
After Unibus (if we skip Q-bus) you had SBI, which was also used both
for controllers (well, mostly bus adaptors) and memory, for the
VAX-11/780. However, evaluation of where the bottleneck was on that
machine led to the memory being moved away from SBI for the VAX-86x0
machines. SBI never was used much for any direct controllers, but
instead you had Unibus adapters, so here the Unibus was used as a pure
I/O bus.
And BI came after that. (I'm ignoring the CMI of the VAX-11/750 and
others here...)
By the time of the VAX, the Unibus was clearly only an I/O bus (for VAX
and high end PDP-11s). No VAX ever had memory on the Unibus. But the
Unibus was not designed with the VAX in mind.
But unless my memory fails me, VAXBI has CPUs, memory and bus adapters
as the most common kinds of devices (or "nodes" as I think they are
called) on BI. Which really helped when you started doing multiprocessor
systems, since you then had a shared bus for all memory and CPU
interconnect. You could also have Unibus adapters on VAXBI, but it was
never common.
And after VAXBI you got XMI. Which is also what early large Alphas used.
Also with CPUs, memory and bus adapters. And in XMI machines, you
usually have VAXBI adapters for peripherals, and of course on an XMI
machine, you would not hook up CPUs or memory on the VAXBI, so in an XMI
machine, VAXBI became de facto an I/O bus.
Not sure there is any relationship at all between VAXBI and PCI, by the way.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-17 1:50 ` Johnny Billquist
@ 2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
1 sibling, 0 replies; 133+ messages in thread
From: Ronald Natalie @ 2018-06-17 10:33 UTC (permalink / raw)
To: TUHS main list
>
> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
@ 2018-06-20 16:55 ` Paul Winalski
2018-06-20 21:35 ` Johnny Billquist
1 sibling, 1 reply; 133+ messages in thread
From: Paul Winalski @ 2018-06-20 16:55 UTC (permalink / raw)
To: Johnny Billquist; +Cc: tuhs
On 6/16/18, Johnny Billquist <bqt@update.uu.se> wrote:
>
> After Unibus (if we skip Q-bus) you had SBI, which was also used both
> for controllers (well, mostly bus adaptors) and memory, for the
> VAX-11/780. However, evaluation of where the bottleneck was on that
> machine led to the memory being moved away from SBI for the VAX-86x0
> machines. SBI never was used much for any direct controllers, but
> instead you had Unibus adapters, so here the Unibus was used as a pure
> I/O bus.
You forgot the Massbus--commonly used for disk I/O.
SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
controller, Unibus controllers, Massbus controllers, and CI
controller.
CI (computer interconnect) was the communications medium for
VAXclusters (including the HSC50 intelligent disk controller).
DEC later introduced another interconnect system called BI, intended
to replace Unibus and Massbus.
As we used to say in our software group about all the various busses
and interconnects:
E Unibus Plurum
-Paul W.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-20 16:55 ` Paul Winalski
@ 2018-06-20 21:35 ` Johnny Billquist
2018-06-20 22:24 ` Paul Winalski
0 siblings, 1 reply; 133+ messages in thread
From: Johnny Billquist @ 2018-06-20 21:35 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
On 2018-06-20 18:55, Paul Winalski wrote:
> On 6/16/18, Johnny Billquist <bqt@update.uu.se> wrote:
>>
>> After Unibus (if we skip Q-bus) you had SBI, which was also used both
>> for controllers (well, mostly bus adaptors) and memory, for the
>> VAX-11/780. However, evaluation of where the bottleneck was on that
>> machine led to the memory being moved away from SBI for the VAX-86x0
>> machines. SBI never was used much for any direct controllers, but
>> instead you had Unibus adapters, so here the Unibus was used as a pure
>> I/O bus.
>
> You forgot the Massbus--commonly used for disk I/O.
I didn't try for pure I/O buses. But yes, the Massbus certainly also
existed.
> SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
> controller, Unibus controllers, Massbus controllers, and CI
> controller.
Right.
> CI (computer interconnect) was the communications medium for
> VAXclusters (including the HSC50 intelligent disk controller).
But CI I wouldn't even call a bus.
> DEC later introduced another interconnect system called BI, intended
> to replace Unibus and Massbus.
I think that is where my comment started.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
2018-06-16 14:10 ` Steve Nickolas
0 siblings, 2 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-16 13:49 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Clem Cole
> The 8 pretty much had a base price in the $30k range in the mid to late
> 60s.
His statement was made in 1977 (ironically, the same year as the Apple
II).
(Not really that relevant, since he was apparently talking about 'smart
homes'; still, the history of DEC and personal computers is not a happy one;
perhaps why that quotation was taken up.)
> Later models used TTL and got down to a single 3U 'drawer'.
There was eventually a single-chip micro version, done in the mid-70's; it
was used in a number of DEC word-processing products.
Noel
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:49 Noel Chiappa
@ 2018-06-16 14:10 ` Clem Cole
2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 14:10 ` Steve Nickolas
1 sibling, 1 reply; 133+ messages in thread
From: Clem Cole @ 2018-06-16 14:10 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1311 bytes --]
Thanks, I thought it was about 10 years earlier. It means that the 16
bit systems were definitely the norm, the 32 bit systems were well under
design, and the micros already birthed. That said, as I pointed out in my
paper last summer, in 1977 a PDP-11 that was able to run UNIX (11/34 with
max memory) ran between $50-150K depending on how it was configured, and an
11/70 was closer to $250K. To scale, in 2017 dollars we calculated that
comes to $208K/$622K/$1M; and as I also pointed out, a graduate researcher
in those days cost about $5-$10K per year.
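The scaling is just a constant multiplier; a quick back-of-envelope check in
Python, using only the figures quoted above (the multiplier is inferred from
them, not an official CPI number):

    # 1977 -> 2017 multiplier implied by the prices above.
    for then, now in [(50_000, 208_000), (150_000, 622_000), (250_000, 1_000_000)]:
        print(f"${then:>9,} in 1977  ->  ${now:>11,} in 2017  (x{now / then:.2f})")
    # Roughly 4x across the board, so a $5-10K/year graduate researcher then
    # is on the order of $20-40K/year in 2017 dollars.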
On Sat, Jun 16, 2018 at 9:49 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
> > From: Clem Cole
>
> > The 8 pretty much had a base price in the $30k range in the mid to
> late
> > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy
> one;
> perhaps why that quotation was taken up.)
>
> > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
> Noel
>
[-- Attachment #2: Type: text/html, Size: 2217 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 14:10 ` Clem Cole
@ 2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 15:28 ` Larry McVoy
0 siblings, 1 reply; 133+ messages in thread
From: Lawrence Stewart @ 2018-06-16 14:34 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
[-- Attachment #1: Type: text/plain, Size: 901 bytes --]
5-10K for a grad student seems low for the late ‘70s. I was an RA at Stanford then and they paid about $950/month. The loaded cost would have been about twice that.
My lab had an 11/34 with V7 and we considered ourselves quite well off.
-L
> On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc@ccc.com> wrote:
>
> Thanks, I thought it was about 10 years earlier. It means that the 16 bit systems were definitely the norm and the 32 bit system were well under design and the micros already birthed. That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K. To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
>
[-- Attachment #2: Type: text/html, Size: 1971 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 14:34 ` Lawrence Stewart
@ 2018-06-16 15:28 ` Larry McVoy
0 siblings, 0 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-16 15:28 UTC (permalink / raw)
To: Lawrence Stewart; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
I was a grad student at UWisc in 1986 and got paid $16K/year.
On Sat, Jun 16, 2018 at 10:34:00AM -0400, Lawrence Stewart wrote:
> 5-10K for a grad student seems low for the late '70s.  I was an RA at Stanford then and they paid about $950/month.  The loaded cost would have been about twice that.
>
> My lab had an 11/34 with V7 and we considered ourselves quite well off.
>
> -L
>
> > On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc@ccc.com> wrote:
> >
> > Thanks, I thought it was about 10 years earlier. It means that the 16 bit systems were definitely the norm and the 32 bit system were well under design and the micros already birthed. That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K. To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
> >
>
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
@ 2018-06-16 14:10 ` Steve Nickolas
2018-06-16 14:13 ` William Pechter
1 sibling, 1 reply; 133+ messages in thread
From: Steve Nickolas @ 2018-06-16 14:10 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, 16 Jun 2018, Noel Chiappa wrote:
> > From: Clem Cole
>
> > The 8 pretty much had a base price in the $30k range in the mid to late
> > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy one;
> perhaps why that quotation was taken up.)
>
> > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
> Noel
>
Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like
Paperboy?
-uso.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 14:10 ` Steve Nickolas
@ 2018-06-16 14:13 ` William Pechter
0 siblings, 0 replies; 133+ messages in thread
From: William Pechter @ 2018-06-16 14:13 UTC (permalink / raw)
To: tuhs, Steve Nickolas, Noel Chiappa; +Cc: tuhs
[-- Attachment #1: Type: text/plain, Size: 1039 bytes --]
Yup. IIRC they used the T11 -- the same chip in the Vax86xx front end.
Bill
On June 16, 2018 10:10:43 AM EDT, Steve Nickolas <usotsuki@buric.co> wrote:
>On Sat, 16 Jun 2018, Noel Chiappa wrote:
>
>> > From: Clem Cole
>>
>> > The 8 pretty much had a base price in the $30k range in the mid
>to late
>> > 60s.
>>
>> His statement was made in 1977 (ironically, the same year as the
>Apple
>> II).
>>
>> (Not really that relevant, since he was apparently talking about
>'smart
>> homes'; still, the history of DEC and personal computers is not a
>happy one;
>> perhaps why that quotation was taken up.)
>>
>> > Later models used TTL and got down to a single 3U 'drawer'.
>>
>> There was eventually a single-chip micro version, done in the
>mid-70's; it
>> was used in a number of DEC word-processing products.
>>
>> Noel
>>
>
>Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like
>Paperboy?
>
>-uso.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
[-- Attachment #2: Type: text/html, Size: 1850 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
` (3 more replies)
0 siblings, 4 replies; 133+ messages in thread
From: Noel Chiappa @ 2018-06-16 13:37 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Dave Horsfall
> idiots keep repeating those "quotes" ... in some sort of an effort to
> make the so-called "experts" look silly; a form of reverse
> jealousy/snobbery or something? It really pisses me off
You've just managed to hit one of my hot buttons.
I can't speak to the motivations of everyone who repeats these stories, but my
professional career has been littered with examples of poor vision from
technical colleagues (some of whom should have known better), against which I
(in my role as an architect, which is necessarily somewhere where long-range
thinking is - or should be - a requirement) have struggled again and again -
sometimes successfully, more often, not.
So I chose those two only because they are well-known examples - but, as you
correctly point out, they are poor examples, for a variety of reasons. But
they perfectly illustrate something I am _all_ too familiar with, and which
happens _a lot_. And the original situation I was describing (the MIT core
patent) really happened - see "Memories that Shaped an Industry", page 211.
Examples of poor vision are legion - and more importantly, often/usually seen
to be such _at the time_ by some people - who were not listened to.
Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
this vague memory of a quote from Gordon Bell admitting that was a mistake,
but I don't recall exactly where I saw it.) That very quickly became a major
limitation. I'm not sure why they did it (the number of contacts on a standard
DEC connector probably had something to do with it, connected to the fact that
the first PDP-11 had only 16-bit addressing anyway), but it should have been
obvious that it was not enough.
And a major one from the very start of my career: the decision to remove the
variable-length addresses from IPv3 and substitute the 32-bit addresses of
IPv4. Alas, I was too junior to be in the room that day (I was just down the
hall), but I've wished forever since that I was there to propose an alternate
path (the details of which I will skip) - that would have saved us all
countless billions (no, I am not exaggerating) spent on IPv6. Dave Reed was
pretty disgusted with that at the time, and history shows he was right.
Dave also tried to get a better checksum algorithm into TCP/UDP (I helped him
write the PDP-11 code to prove that it was plausible) but again, it got turned
down. (As did his wonderful idea for bottom-up address allocation, instead of
top-down. Oh well.) People have since discussed issues with the TCP/IP
checksum, but it's too late now to change it.
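For reference, the checksum the protocols ended up with is the 16-bit
ones'-complement sum of RFC 1071; a short Python sketch of that algorithm
(this is the scheme that was kept, not Reed's rejected alternative, which
isn't described here):

    def internet_checksum(data: bytes) -> int:
        # RFC 1071 style: sum 16-bit words in ones'-complement arithmetic,
        # folding carries back in, then take the ones' complement of the sum.
        if len(data) % 2:
            data += b"\x00"                     # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry
        return ~total & 0xFFFF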
One place where I _did_ manage to win was in adding subnetting support to
hosts (in the Host Requirements WG); it was done the way I wanted, with the
result that when CIDR came along, even though it hadn't been foreseen at the
time we did subnetting, it required _no_ host changes of any kind. But
mostly I lost. :-(
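The result described above follows from hosts treating the mask as an opaque
bit pattern rather than something derived from the address class; a CIDR
prefix is then just another mask. A minimal Python sketch of the host-side
on-link test (the addresses are made-up examples):

    import ipaddress

    def same_link(local: str, mask: str, dest: str) -> bool:
        # Compare under the configured mask, never under the class implied by
        # the address; because the mask is opaque, the identical test works
        # for classful subnets and, later, for CIDR prefixes.
        l = int(ipaddress.IPv4Address(local))
        d = int(ipaddress.IPv4Address(dest))
        m = int(ipaddress.IPv4Address(mask))
        return (l & m) == (d & m)

    # e.g. same_link("18.26.0.5", "255.255.0.0", "18.26.3.9") -> True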
So, is poor vision common? All too common.
Noel
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
@ 2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
` (2 subsequent siblings)
3 siblings, 0 replies; 133+ messages in thread
From: Clem Cole @ 2018-06-16 18:59 UTC (permalink / raw)
To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 8352 bytes --]
below...
On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:
>
>
> I can't speak to the motivations of everyone who repeats these stories,
> but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against
> which I
> (in my role as an architect, which is necessarily somewhere where
> long-range
> thinking is - or should be - a requirement) have struggled again and again
> -
> sometimes successfully, more often, not.
>
Amen, although sadly many of us if not all of us have a few of these
stories. In fact, I'm fighting another one of these battles right now.🤔
My experience is that more often than not, it's less a failure to see what
a successful future might bring, and often one of well '*we don't need to
do that now/costs too much/we don't have the time*.'
That said, DEC was the epitome of the old line about perfection being the
enemy of success. I like to say to my colleagues: pick the things
that are going to really matter. Make those perfect and bet the company on
them. But think in terms of what matters. As you point out, address size
issues are killers and you need to get those right at time t0.
Without saying too much: many firms, like my own, think in terms of
computation (math libraries, CPU kernels), but frankly, if I cannot get the
data to/from the CPU's functional units, or the data is stored in the wrong
place, or I have much of the main memory tied up in the OS managing
different types of user memory, it doesn't matter [HPC customers in
particular pay for getting a job done -- they really don't care how -- just
get it done and done fast].
To me, it becomes a matter of 'value' -- our HW folks know a crappy
computational system will doom the device, so that is what they put their
effort into building. My argument has often been that the messaging
systems, memory hierarchy and housekeeping are what you have to get right
at this point. No amount of SW will fix HW that is lacking the right
support in those places (not that lots of computes are bad, but they are
actually not the big issue in HPC when you get down to it these days).
>
> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
> have
> this vague memory of a quote from Gordon Bell admitting that was a mistake,
> but I don't recall exactly where I saw it.)
I think it was part of the same paper where he made the observation that
the greatest mistake an architecture can have is too few address bits.
My understanding is that the problem was that UNIBUS was perceived as an
I/O bus and as I was pointing out, the folks creating it/running the team
did not value it, so in the name of 'cost', more bits was not considered
important.
I used to know and work with the late Henk Schalke, who ran Unibus (HW)
engineering at DEC for many years. Henk was notoriously frugal (we might
even say 'cheap'), so I can imagine that he did not want to spend on
anything that he thought was wasteful. Just like I retold the
Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
I don't know for sure, but I can see that without someone really arguing
with Henk as to why 18 bits was not 'good enough.' I can imagine the
conversation going something like: Someone like me saying: *"Henk, 18 bits
is not going to cut it."* He might have replied something like: *"Bool
sheet *[a dutchman's way of cursing in English], *we already gave you two
more bit than you can address* (actually he'd then probably stop mid
sentence and translate in his head from Dutch to English - which was always
interesting when you argued with him).
Note: I'm not blaming Henk, just stating that his thinking was very much
that way, and I suspect he was not alone. Only someone like Gordon at
the time could have overruled it, and I don't think the problems were
foreseen as Noel notes.
>
> And a major one from the very start of my career: the decision to remove
> the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.
>
I always wondered about the back story on that one. I do seem to remember
that there had been a proposal for variable-length addresses at one point,
but I never knew why it was not picked. As you say, I was certainly way too
junior to have been part of that discussion. We just had some of the
documents from you guys and we were told to try to implement it. My guess
is this is an example of folks thinking variable-length addressing was
wasteful. 32 bits seemed infinite in those days and nobody expected the
network to scale to the size it is today and will grow to in the future [I
do remember, before Noel and team came up with ARP, somebody quipped that
Xerox Ethernet's 48 bits were too big and IP's 32 bits were too small. The
original hack I did was: since we used 3Com boards and they all shared the
upper 3 bytes of the MAC address, we mapped the lower 24 to the IP address -
we were not connecting to the global network, so it worked. Later we used a
lookup table, until the ARP trick was created].
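The pre-ARP hack described above amounts to deriving the Ethernet address
directly from the IP address: keep the boards' fixed upper three bytes and
drop the low 24 bits of the IP address into the lower three. A Python sketch
of the idea (the vendor prefix and the function name are placeholders, not
the exact values used back then):

    def ip_to_mac(ip: str, oui: bytes = b"\x02\x60\x8c") -> str:
        # Fixed vendor prefix + low 24 bits of the IPv4 address. Only works
        # when every host on the (isolated) wire plays the same game, which
        # is exactly why it broke down once real interconnection arrived.
        octets = [int(x) for x in ip.split(".")]
        mac = list(oui) + octets[1:]            # last 3 octets of the address
        return ":".join(f"{b:02x}" for b in mac)

    # e.g. ip_to_mac("18.26.0.5") -> "02:60:8c:1a:00:05"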
>
> One place where I _did_ manage to win was in adding subnetting support to
> hosts (in the Host Requirements WG); it was done the way I wanted, with the
> result that when CIDR came along, even though it hadn't been forseen at the
> time we did subnetting, it required _no_ hosts changes of any kind.
Amen and thank you.
> But
>
> mostly I lost. :-(
>
I know the feeling. Too many battles that in hindsight you think - darn, if
they had only listened. FWIW: if you try to mess with Intel OPA2 fabric
these days there is a back story. A few years ago I had quite a battle
with the HW folks, but I won that one. The SW cannot tell the difference
between on-die or off-die, so the OS does not have to manage it. Huge
difference in OS performance and space efficiency. But I suspect that
there are some HW folks that spit on the floor when I come in the room.
We'll see if I am proven to be right in the long run; but at 1M cores I
don't want to think of the OS mess to manage two different types of memory
for the message system.
>
> So, is poor vision common? All too common.
Indeed. But to be fair, you can also end up being like DEC and often
late to the market.
My example is Alpha (and 64-bit vs 32-bit). No attempt to support
32-bit was really made, because 64-bit was the future. Sr. folks considered
32-bit mode wasteful. The argument was that adding it was not only
technically not a good idea, but it would suck up engineering resources to
implement it in both HW and SW. Plus, we were coming from the VAX, so folks
had to recompile all the code anyway (god forbid that the SW might not be
64-bit clean, mind you). [VMS did a few hacks, but Tru64 stayed 'clean.']
Similarly (back to the UNIX theme for this mailing list), Tru64 was a rewrite
of OSF/1 - but hardly any OSF code was left in the DEC kernel by the time
it shipped. Each subsystem was rewritten/replaced to 'make it perfect'
[always with a good argument, mind you, but never looking at the long term
issues]. Those two choices cost 3 years in market acceptance. By the
time Alphas hit the street, it did not matter. I think in both cases Alpha
would have been better accepted if DEC had shipped earlier with a few hacks,
and then improved Tru64 as better versions were developed (*i.e.* replacing
the memory system, the I/O system, the TTY handler, the FS,
just to name a few that got rewritten from OSF/1 because folks thought they
were 'weak').
The trick, in my mind, is to identify the real technical features you cannot
fix later and get those right at the beginning. Then place the bet on
those features, and develop as fast as you can, doing the best with them
you are able given your constraints. Then slowly, over time, improve the
things that mattered less at the beginning, as you have a revenue stream.
If you wait for perfection, you get something like Alpha, which was a great
architecture (particularly compared to INTEL*64) - but in the end it did not
matter.
Clem
[-- Attachment #2: Type: text/html, Size: 13046 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
@ 2018-06-17 12:15 ` Derek Fawcus
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-18 12:36 ` Tony Finch
3 siblings, 0 replies; 133+ messages in thread
From: Derek Fawcus @ 2018-06-17 12:15 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
>
> And a major one from the very start of my career: the decision to remove the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.
Are you able to point to any document which still describes that variable length scheme?
I see that IEN 28 defines a variable length scheme (using version 2),
and that IEN 41 defines a different variable length scheme,
but is proposing to use version 4.
(IEN 44 looks a lot like the current IPv4).
DF
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
@ 2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
2018-06-18 12:36 ` Tony Finch
3 siblings, 2 replies; 133+ messages in thread
From: Theodore Y. Ts'o @ 2018-06-17 17:33 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> I can't speak to the motivations of everyone who repeats these stories, but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against which I
> (in my role as an architect, which is necessarily somewhere where long-range
> thinking is - or should be - a requirement) have struggled again and again -
> sometimes successfully, more often, not....
>
> Examples of poor vision are legion - and more importantly, often/usually seen
> to be such _at the time_ by some people - who were not listened to.
To be fair, it's really easy to be wise after the fact. Let's
start with Unix; Unix is very bare-bones, where other OS architects
wanted to add lots of features that were spurned for simplicity's
sake. Or we could compare X.500 versus LDAP, and X.400 and SMTP.
It's easy to mock decisions that weren't forward-thinking enough; but
it's also really easy to mock failed protocols and designs that
collapsed of their own weight because architects added too much "maybe
it will be useful in the future".
The architects that designed the original (never shipped) Microsoft
Vista thought they had great "vision". Unfortunately they added way
too much complexity and overhead to an operating system just in time
to watch the failure of Moore's law to be able to support all of that
overhead. Adding a database into the kernel and making it a
fundamental part of the file system? OK, stupid? How about adding
all sorts of complexity in VMS and network protocols to support
record-oriented files?
Sometimes having architects succeed in adding their "vision" to
a product can be the worst thing that ever happened to an operating system
or, say, the entire OSI networking protocol suite.
> So, is poor vision common? All too common.
Definitely. The problem is it's hard to figure out in advance which
is poor vision versus brilliant engineering to cut down the design so
that it is "as simple as possible", but nevertheless, "as complex as
necessary".
- Ted
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-17 17:33 ` Theodore Y. Ts'o
@ 2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
1 sibling, 0 replies; 133+ messages in thread
From: Jon Forrest @ 2018-06-17 19:50 UTC (permalink / raw)
To: tuhs
On 6/17/2018 10:33 AM, Theodore Y. Ts'o wrote:
> The architects that designed the original (never shipped) Microsoft
> Vista thought they had great "vision".
There's a very fine boundary between "vision" and "hallucination".
Jon
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
@ 2018-06-18 14:56 ` Clem Cole
1 sibling, 0 replies; 133+ messages in thread
From: Clem Cole @ 2018-06-18 14:56 UTC (permalink / raw)
To: Theodore Y. Ts'o; +Cc: The Eunuchs Hysterical Society, Noel Chiappa
[-- Attachment #1: Type: text/plain, Size: 3493 bytes --]
On Sun, Jun 17, 2018 at 1:33 PM, Theodore Y. Ts'o <tytso@mit.edu> wrote:
> On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> > I can't speak to the motivations of everyone who repeats these stories,
> but my
> > professional career has been littered with examples of poor vision from
> > technical colleagues (some of whom should have known better), against
> which I
> > (in my role as an architect, which is necessarily somewhere where
> long-range
> > thinking is - or should be - a requirement) have struggled again and
> again -
> > sometimes successfully, more often, not....
> >
> > Examples of poor vision are legion - and more importantly, often/usually
> seen
> > to be such _at the time_ by some people - who were not listened to.
>
> To be fair, it's really easy to be wise to after the fact. Let's
> start with Unix; Unix is very bare-bones, when other OS architects
> wanted to add lots of features that were spurned for simplicity's
> sake.
Amen brother. I refer to this as figuring out and understanding what
matters and what is really just window dressing. That is much easier to
do after the fact and for those of us that lived UNIX, we spent a lot of
time defending it. Many of the 'attacks' were from systems like VMS and
RSX that were thought to be more 'complete' or 'professional.'
> Or we could compare X.500 versus LDAP, and X.400 and SMTP.
>
Hmmm. I'll accept X.500, but SMTP I always said was hardly 'simple' -
although compared to what it replaced (FTPing files and remote execution)
it was.
>
> It's easy to mock decisions that weren't forward-thinking enough; but
> it's also really easy to mock failed protocols and designs that
> collapsed of their own weight because architects added too much "maybe
> it will be useful in the future".
>
+1
>
> ...
> Adding a database into the kernel and making it a
> fundamental part of the file system? OK, stupid? How about adding
> all sorts of complexity in VMS and network protocols to support
> record-oriented files?
>
tjt once put it well: 'It's not so bad that RMS has 250-1000 options, but
someone has to check for each of them on every IO.'
>
> Sometimes having architects being successful to add their "vision" to
> a product can be worst thing that ever happened to a operating sytsem
> or, say, the entire OSI networking protocol suite.
>
I'll always describe it as having 'good taste.' And part of 'good taste'
is learning what really works and what really does not. BTW: having good
taste in one thing does not necessarily give you license in another area. And
I think that is a common issue. "Hey, we were successful here, we
must be geniuses..." Much of DEC's SW was good, but not all of it, as an
example. Or to pick on my own BSD experience, sendmail is a great example
of something that solved a problem we had, but boy do I wish Eric had not
screwed the SMTP daemon into it ....
>
> > So, is poor vision common? All too common.
>
> Definitely. The problem is it's hard to figure out in advance which
> is poor vision versus brilliant engineering to cut down the design so
> that it is "as simple as possible", but nevertheless, "as complex as
> necessary".
Exactly... or as was said before: *as simple as possible, but not
simpler.* But I like to add that understanding 'possible' is different from
'it works.'
[-- Attachment #2: Type: text/html, Size: 6603 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 13:37 Noel Chiappa
` (2 preceding siblings ...)
2018-06-17 17:33 ` Theodore Y. Ts'o
@ 2018-06-18 12:36 ` Tony Finch
3 siblings, 0 replies; 133+ messages in thread
From: Tony Finch @ 2018-06-18 12:36 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> (As did [Dave Reed's] wonderful idea for bottom-up address allocation,
> instead of top-down. Oh well.)
Was this written down anywhere?
Tony.
--
f.anthony.n.finch <dot@dotat.at> http://dotat.at/
the widest possible distribution of wealth
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 12:58 Doug McIlroy
2018-06-20 11:10 ` Dave Horsfall
0 siblings, 1 reply; 133+ messages in thread
From: Doug McIlroy @ 2018-06-16 12:58 UTC (permalink / raw)
To: tuhs
> jay forrester first described an invention called core memory in a lab
notebook 69 years ago today.
Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.
Doug
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 12:58 Doug McIlroy
@ 2018-06-20 11:10 ` Dave Horsfall
0 siblings, 0 replies; 133+ messages in thread
From: Dave Horsfall @ 2018-06-20 11:10 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Sat, 16 Jun 2018, Doug McIlroy wrote:
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>
> Core memory wiped out competing technologies (Williams tube [...]
Ah, the Williams tube. Was that not the one where women clerks (they
weren't called programmers) wearing nylon stockings were not allowed
anywhere near them, because of the static zaps thus generated? Or was
that yet another urban myth that appears to be polluting my history file?
How said women were determined to be wearing nylons and thus banned, well.
I have no idea... Hell, even a popular computer weekly in the 80s used to
joke that "software" was a female computer programmer; I'm sure that Rear
Admiral Grace Hopper (USN retd, decd) would've had a few words to say
about that sexism...
-- Dave
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
@ 2018-06-16 12:51 Noel Chiappa
2018-06-16 13:11 ` Clem cole
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-16 12:51 UTC (permalink / raw)
To: dave; +Cc: tuhs, jnc
> From: Dave Horsfall <dave@horsfall.org>
>> one of the Watson's saying there was a probably market for
>> <single-digit> of computers; Ken Olsen saying people wouldn't want
>> computers in their homes; etc, etc.
> I seem to recall reading somewhere that these were urban myths... Does
> anyone have actual references in their contexts?
Well, for the Watson one, there is some controversy:
https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution
My guess is that he might actually have said it, and it was passed down orally
for a while before it was first written down. The thing is that he is alleged
to have said it in 1943, and there probably _was_ a market for only 5 of the
kind of computing devices available at that point (e.g. the Mark I).
> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc
No. The 7094 is circa 1960, almost 20 years later.
> Olsens's quote was about the DEC-System 10...
Again, no. He did say it, but it was not about PDP-10s:
https://en.wikiquote.org/wiki/Ken_Olsen
"Olsen later explained that he was referring to smart homes rather than
personal computers." Which sounds plausible (in the sense of 'what he meant',
not 'it was correct'), given where he said it (a World Future Society
meeting).
Noel
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 12:51 Noel Chiappa
@ 2018-06-16 13:11 ` Clem cole
0 siblings, 0 replies; 133+ messages in thread
From: Clem cole @ 2018-06-16 13:11 UTC (permalink / raw)
To: Noel Chiappa; +Cc: tuhs
And I believe at the time, KO was commenting in terms of Bell’s ‘minimal computer’ definition (the ‘mini’ - aka 12 bit systems) of the day - DEC’s PDP-8, not the 10. IIRC the 8 pretty much had a base price in the $30k range in the mid to late 60s. FWIW the original 8 was discrete bipolar (matched) transistors built with DEC flip chips, and physically about 2 or 3 19” relay racks in size. Later models used TTL and got down to a single 3U ‘drawer.’
Clem
Also please remember that originally, ‘mini’ did not mean small. That was a computer press redo when the single-chip ‘micro’ computers came into being in the mid 1970s.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
On Jun 16, 2018, at 8:51 AM, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>> From: Dave Horsfall <dave@horsfall.org>
>
>>> one of the Watson's saying there was a probably market for
>>> <single-digit> of computers; Ken Olsen saying people wouldn't want
>>> computers in their homes; etc, etc.
>
>> I seem to recall reading somewhere that these were urban myths... Does
>> anyone have actual references in their contexts?
>
> Well, for the Watson one, there is some controversy:
>
> https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution
>
> My guess is that he might actually have said it, and it was passed down orally
> for a while before it was first written down. The thing is that he is alleged
> to have said it in 1943, and there probably _was_ a market for only 5 of the
> kind of computing devices available at that point (e.g. the Mark I).
>
>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc
>
> No. The 7094 is circa 1960, almost 20 years later.
>
>
>> Olsens's quote was about the DEC-System 10...
>
> Again, no. He did say it, but it was not about PDP-10s:
>
> https://en.wikiquote.org/wiki/Ken_Olsen
>
> "Olsen later explained that he was referring to smart homes rather than
> personal computers." Which sounds plausible (in the sense of 'what he meant',
> not 'it was correct'), given where he said it (a World Future Society
> meeting).
>
> Noel
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
@ 2018-06-15 15:25 Noel Chiappa
2018-06-15 23:05 ` Dave Horsfall
0 siblings, 1 reply; 133+ messages in thread
From: Noel Chiappa @ 2018-06-15 15:25 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: "John P. Linderman"
> 4 bucks a bit!
When IBM went to license the core patent(s?) from MIT, they offered MIT a
choice of a large lump sum (the number US$20M sticks in my mind), or US$.01 a
bit.
The negotiators from MIT took the lump sum.
When I first heard this story, I thought it was corrupt folklore, but I later
found it in one of the IBM histories.
This story repeats itself over and over again, though: one of the Watsons
saying there was probably a market for <single-digit> computers; Ken Olsen
saying people wouldn't want computers in their homes; etc, etc.
Noel
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 15:25 Noel Chiappa
@ 2018-06-15 23:05 ` Dave Horsfall
2018-06-15 23:22 ` Lyndon Nerenberg
0 siblings, 1 reply; 133+ messages in thread
From: Dave Horsfall @ 2018-06-15 23:05 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Fri, 15 Jun 2018, Noel Chiappa wrote:
> This story repets itself over and over again, though: one of the
> Watson's saying there was a probably market for <single-digit> of
> computers; Ken Olsen saying people wouldn't want computers in their
> homes; etc, etc.
I seem to recall reading somewhere that these were urban myths... Does
anyone have actual references in their contexts?
E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I
think that Olsen's quote was about the DEC-System 10...
-- Dave, who has been known to be wrong before
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 23:05 ` Dave Horsfall
@ 2018-06-15 23:22 ` Lyndon Nerenberg
2018-06-16 6:36 ` Dave Horsfall
0 siblings, 1 reply; 133+ messages in thread
From: Lyndon Nerenberg @ 2018-06-15 23:22 UTC (permalink / raw)
To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society
>> This story repets itself over and over again, though: one of the Watson's
>> saying there was a probably market for <single-digit> of computers; Ken
>> Olsen saying people wouldn't want computers in their homes; etc, etc.
>
> I seem to recall reading somewhere that these were urban myths... Does
> anyone have actual references in their contexts?
>
> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I
> think that Olsens's quote was about the DEC-System 10...
At the time, those were the only computers out there. The "personal
computer" was the myth of the day. Go back and look at the prices IBM and
DEC were charging for their gear. Even in the modern context, that
hardware was more expensive than the ridiculously inflated prices of
housing in Vancouver.
--lyndon
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 23:22 ` Lyndon Nerenberg
@ 2018-06-16 6:36 ` Dave Horsfall
2018-06-16 19:07 ` Clem Cole
0 siblings, 1 reply; 133+ messages in thread
From: Dave Horsfall @ 2018-06-16 6:36 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Fri, 15 Jun 2018, Lyndon Nerenberg wrote:
>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and
>> I think that Olsen's quote was about the DEC-System 10...
>
> At the time, those were the only computers out there. The "personal
> computer" was the myth of the day. Go back and look at the prices IBM
> and DEC were charging for their gear. Even in the modern context, that
> hardware was more expensive than the rediculously inflated prices of
> housing in Vancouver.
Precisely, but idiots keep repeating those "quotes" (if they were repeated
accurately at all) in some sort of an effort to make the so-called
"experts" look silly; a form of reverse jealousy/snobbery or something?
It really pisses me off, and I'm sure that there's a medical term for
it...
I came across a web site that had these populist quotes, alongside the
original *taken in context*, but I'm damned if I can find it now.
I mean, how many people could afford a 7094, FFS? The power bill, the air
conditioning, the team of technicians, the warehouse of spare parts...
Hell, I'll bet that my iPhone has more power than our System-360/50, but
it has nowhere near the sheer I/O throughput of a mainframe :-)
-- Dave
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 6:36 ` Dave Horsfall
@ 2018-06-16 19:07 ` Clem Cole
2018-06-18 9:25 ` Tim Bradshaw
0 siblings, 1 reply; 133+ messages in thread
From: Clem Cole @ 2018-06-16 19:07 UTC (permalink / raw)
To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1781 bytes --]
On Sat, Jun 16, 2018 at 2:36 AM, Dave Horsfall <dave@horsfall.org> wrote:
> ...
>
> Hell, I'll bet that my iPhone has more power than our System-360/50, but
> it has nowhere near the sheer I/O throughput of a mainframe :-)
I sent Dave's comment to the chief designer of the Model 50 (my friend
Russ Robelen). His reply is cut/pasted below. For context, he refers to a
very rare book, ‘IBM’s 360 and Early 370 Systems’ by Pugh, Johnson &
J.H. Palmer, a book that IBMers like Russ consider the biblical text on
the 360:
As to Dave’s comment: I'll bet that my iPhone has more power than our
System-360/50, but it has nowhere near the sheer I/O throughput of a
mainframe.
The ratio of bits of I/O to CPU MIPS was very high back in those days,
particularly for the Model 50, which was considered a ‘Commercial machine’
vs a ‘Scientific machine’. The machine was doing payroll and inventory
management: high on I/O, low on compute. Much different today, even for an
iPhone. The latest iPhone runs on an ARMv8 derivative of Apple's "Swift"
dual-core architecture called "Cyclone", and it runs at 1.3 GHz. The Mod 50
ran at 2 MHz. The ARMv8 is a 64-bit machine. The Mod 50 was a 32-bit machine.
The Mod 50 had no cache (memory ran at 0.5 MHz). Depending on what
instruction mix you want to use, I would put the iPhone at conservatively
1,500 times the Mod 50. I might add, a typical Mod 50 system with I/O sold
for $1M.
On the question of memory cost - this is from the bible on the 360 mentioned
earlier: "For example, the Model 50 main memory with a read-write cycle of 2
microseconds cost .8 cents per bit." (Page 194, Chapter 4, ‘IBM’s 360 and
Early 370 Systems’)
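Russ's 'conservatively 1,500 times' is easy to sanity-check from the numbers
he quotes; a back-of-envelope Python calculation (clock and width only,
ignoring caches, pipelining and superscalar issue, all of which favour the
iPhone further):

    mod50_clock_hz  = 2_000_000           # Model 50: 2 MHz, 32-bit, no cache
    iphone_clock_hz = 1_300_000_000       # "Cyclone": 1.3 GHz, 64-bit, 2 cores
    clock_ratio = iphone_clock_hz / mod50_clock_hz       # 650x on clock alone
    rough_ratio = clock_ratio * 2 * (64 / 32)             # cores x word width
    print(clock_ratio, rough_ratio)       # 650.0 2600.0 -- so 1,500x is indeed conservative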
[-- Attachment #2: Type: text/html, Size: 5818 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 19:07 ` Clem Cole
@ 2018-06-18 9:25 ` Tim Bradshaw
2018-06-19 20:45 ` Peter Jeremy
0 siblings, 1 reply; 133+ messages in thread
From: Tim Bradshaw @ 2018-06-18 9:25 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection (this is towards the phone, the other direction is much worse). Over WiFi with something serious behind it it gets 70-80Mb/s in both directions. I have no idea what the raw WiFi bandwidth limit is, still less what the raw memory bandwidth is or its bandwidth to 'disk' (ie flash or whatever the storage in the phone is), but the phone has much more bandwidth over a partly wireless network to an endpoint tens or hundreds of miles away than the 360/50 had to main memory.
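For reference, the arithmetic behind that main-memory figure (simple
recomputation of the numbers in the paragraph above; how it rounds to
"20Mb/s" depends on whether you count bits or bytes):
    # 4 bytes every 2 microseconds, expressed per second.
    bytes_per_sec = 4 / 2e-6          # 2,000,000 bytes/s, i.e. 2 MB/s
    print(bytes_per_sec * 8 / 1e6)    # about 16 megabits/s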
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-18 9:25 ` Tim Bradshaw
@ 2018-06-19 20:45 ` Peter Jeremy
2018-06-19 22:55 ` David Arnold
0 siblings, 1 reply; 133+ messages in thread
From: Peter Jeremy @ 2018-06-19 20:45 UTC (permalink / raw)
To: Tim Bradshaw; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 985 bytes --]
On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb@tfeb.org> wrote:
>Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>
>The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection
One way of looking at this actually backs up the claim: An iPhone has maybe
3 orders of magnitude more CPU power than a 360/50 but only a couple of
times the I/O bandwidth. So it's actually got maybe 2 orders of magnitude
less relative I/O throughput.
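The shape of that argument in numbers (using the rough ratios from this
thread, not measurements):
    # ~1000x the CPU power but only ~2x the I/O bandwidth.
    cpu_ratio = 1000.0
    io_ratio  = 2.0
    print(io_ratio / cpu_ratio)   # 0.002: relative I/O throughput is down
                                  # by roughly two to three orders of magnitude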
--
Peter Jeremy
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 963 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-19 20:45 ` Peter Jeremy
@ 2018-06-19 22:55 ` David Arnold
2018-06-20 5:04 ` Peter Jeremy
0 siblings, 1 reply; 133+ messages in thread
From: David Arnold @ 2018-06-19 22:55 UTC (permalink / raw)
To: Peter Jeremy; +Cc: The Eunuchs Hysterical Society
Does the screen count as I/O?
I’d suggest that it’s just that the balance is (intentionally) quite different. If you squint right, a GPU could look like a channelized I/O controller.
d
> On 20 Jun 2018, at 06:45, Peter Jeremy <peter@rulingia.com> wrote:
>
>> On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb@tfeb.org> wrote:
>> Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>>
>> The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s. I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection
>
> One way of looking at this actually backs up the claim: An iPhone has maybe
> 3 orders of magnitude more CPU power than a 360/50 but only a couple of
> times the I/O bandwidth. So it's actually got maybe 2 orders of magnitude
> less relative I/O throughput.
>
> --
> Peter Jeremy
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-19 22:55 ` David Arnold
@ 2018-06-20 5:04 ` Peter Jeremy
2018-06-20 5:41 ` Warner Losh
0 siblings, 1 reply; 133+ messages in thread
From: Peter Jeremy @ 2018-06-20 5:04 UTC (permalink / raw)
To: David Arnold; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 939 bytes --]
On 2018-Jun-20 08:55:05 +1000, David Arnold <davida@pobox.com> wrote:
>Does the screen count as I/O?
I was thinking about that as well. 1080p30 video is around 2MBps as H.264 or
about 140MBps as 6bpp raw. The former is negligible, the latter is still shy
of the disparity in CPU power, especially if you take into account the GPU
power needed to do the decoding.
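A sanity check of that raw-video figure, reading "6bpp" as 6 bits per
colour channel (18 bits per pixel); that reading is an assumption, but it
reproduces the number:
    # 1080p30 raw video at an assumed 18 bits per pixel.
    width, height, fps = 1920, 1080, 30
    bits_per_pixel = 6 * 3
    print(width * height * fps * bits_per_pixel / 8 / 1e6)   # ~140 MB/s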
>I’d suggest that it’s just that the balance is (intentionally) quite different. If you squint right, a GPU could look like a channelized I/O controller.
I agree. Even back then, there was a difference between commercial-oriented
mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and the
scientific mainframes (709x, 360/85 - which stressed arithmetic capabilities).
One, not too inaccurate, description of the BCM2835 (RPi SoC) is that it's a
GPU and the sole job of the CPU is to push data into the GPU.
--
Peter Jeremy
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 963 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-20 5:04 ` Peter Jeremy
@ 2018-06-20 5:41 ` Warner Losh
0 siblings, 0 replies; 133+ messages in thread
From: Warner Losh @ 2018-06-20 5:41 UTC (permalink / raw)
To: Peter Jeremy; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1030 bytes --]
On Tue, Jun 19, 2018 at 11:04 PM, Peter Jeremy <peter@rulingia.com> wrote:
> On 2018-Jun-20 08:55:05 +1000, David Arnold <davida@pobox.com> wrote:
> >Does the screen count as I/O?
>
> I was thinking about that as well. 1080p30 video is around 2MBps as H.264
> or
> about 140MBps as 6bpp raw. The former is negligible, the latter is still
> shy
> of the disparity in CPU power, especially if you take into account the GPU
> power needed to do the decoding.
>
> >I’d suggest that it’s just that the balance is (intentionally) quite
> different. If you squint right, a GPU could look like a channelized I/O
> controller.
>
> I agree. Even back then, there was a difference between
> commercial-oriented
> mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and
> the
> scientific mainframes (709x, 360/85 - which stressed arithmetic
> capabilities).
>
So what could an old mainframe do as far as I/O was concerned? Google
didn't provide me a straightforward answer...
Warner
[-- Attachment #2: Type: text/html, Size: 1476 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* [TUHS] core
@ 2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
` (2 more replies)
0 siblings, 3 replies; 133+ messages in thread
From: A. P. Garcia @ 2018-06-15 11:19 UTC (permalink / raw)
To: tuhs
[-- Attachment #1: Type: text/plain, Size: 127 bytes --]
jay forrester first described an invention called core memory in a lab
notebook 69 years ago today.
kill -3 mailx
core dumped
[-- Attachment #2: Type: text/html, Size: 238 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
@ 2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
2018-06-20 10:06 ` Dave Horsfall
2 siblings, 0 replies; 133+ messages in thread
From: Steffen Nurpmeso @ 2018-06-15 13:50 UTC (permalink / raw)
To: tuhs
A. P. Garcia wrote in <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivA\
Xg@mail.gmail.com>:
|jay forrester first described an invention called core memory in a \
|lab notebook 69 years ago today.
|
|kill -3 mailx
|
|core dumped
No no, that will not die. It says "Quit", and then exits.
But maybe it will be able to create text/html via MIME alternative
directly a few years from now. In general I am talking about my own
maintained branch here only, of course.
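A tiny illustration of the distinction (a generic sketch, nothing to do
with mailx internals):
    # With SIGQUIT left at its default action the process terminates with
    # a core dump (ulimit -c permitting); with a handler installed it just
    # prints "Quit" and exits, which is what a mail reader typically does.
    import os, signal, sys

    def quit_handler(signum, frame):
        print("Quit")
        sys.exit(0)

    if "handle" in sys.argv:
        signal.signal(signal.SIGQUIT, quit_handler)

    os.kill(os.getpid(), signal.SIGQUIT)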
--End of <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail\
.com>
A nice weekend everybody, may it overtrump the former.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
@ 2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-20 10:06 ` Dave Horsfall
2 siblings, 2 replies; 133+ messages in thread
From: Clem Cole @ 2018-06-15 14:21 UTC (permalink / raw)
To: A. P. Garcia; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 930 bytes --]
On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
wrote:
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
>
Be careful -- Forrester named it, put it into an array, and built a
random-access memory with it, but An Wang invented and patented the basic
technology we now call 'core' in 1955: patent 2,708,722
<https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
memory'). As I understand it (I'm old, but not that old, so I cannot
speak from experience, as I was a youngling when this patent came about),
Wang thought Forrester's use of his idea was great, but Wang's patent was
the broader one.
There is an interesting history of Wang and his various fights at: An Wang
- The Man Who Might Have Invented The Personal Computer
<http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
[-- Attachment #2: Type: text/html, Size: 2020 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 14:21 ` Clem Cole
@ 2018-06-15 15:11 ` John P. Linderman
2018-06-15 15:21 ` Larry McVoy
2018-06-16 1:08 ` Greg 'groggy' Lehey
1 sibling, 1 reply; 133+ messages in thread
From: John P. Linderman @ 2018-06-15 15:11 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1276 bytes --]
Excerpt from the article Clem pointed to:
The company was called Wang Laboratories and it specialised in magnetic
memories. He used his contacts to sell magnetic cores which he built and
sold for $4 each.
4 bucks a bit!
On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc@ccc.com> wrote:
>
>
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> Be careful -- Forrester named it and put it into an array and build a
> random access memory with it, but An Wang invented and patented basic
> technology we now call 'core' in 1955 2,708,722
> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> memory'). As I understand it (I'm old, but not that old so I can not
> speak from experience, as I was a youngling when this patent came about),
> Wang thought Forrester's use of his idea was great, but Wang's patent was
> the broader one.
>
> There is an interesting history of Wang and his various fights at: An
> Wang - The Man Who Might Have Invented The Personal Computer
> <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
>
>
[-- Attachment #2: Type: text/html, Size: 3660 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 15:11 ` John P. Linderman
@ 2018-06-15 15:21 ` Larry McVoy
0 siblings, 0 replies; 133+ messages in thread
From: Larry McVoy @ 2018-06-15 15:21 UTC (permalink / raw)
To: John P. Linderman; +Cc: The Eunuchs Hysterical Society
And today it is $.000000009 / bit.
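For scale, the ratio between those two prices (both taken at face value):
    # $4 per bit for early core vs. $0.000000009 per bit today.
    print(4 / 0.000000009)   # roughly 4.4e8, a factor of about 440 million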
On Fri, Jun 15, 2018 at 11:11:19AM -0400, John P. Linderman wrote:
> Excerpt from the article Clem pointed to:
>
> The company was called Wang Laboratories and it specialised in magnetic
> memories. He used his contacts to sell magnetic cores which he built and
> sold for $4 each.
>
>
> 4 bucks a bit!
>
> On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc@ccc.com> wrote:
>
> >
> >
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it and put it into an array and build a
> > random access memory with it, but An Wang invented and patented basic
> > technology we now call 'core' in 1955 2,708,722
> > <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> > memory'). As I understand it (I'm old, but not that old so I can not
> > speak from experience, as I was a youngling when this patent came about),
> > Wang thought Forrester's use of his idea was great, but Wang's patent was
> > the broader one.
> >
> > There is an interesting history of Wang and his various fights at: An
> > Wang - The Man Who Might Have Invented The Personal Computer
> > <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
> >
> >
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
@ 2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
` (2 more replies)
1 sibling, 3 replies; 133+ messages in thread
From: Greg 'groggy' Lehey @ 2018-06-16 1:08 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1041 bytes --]
On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> Be careful -- Forrester named it and put it into an array and build a
> random access memory with it, but An Wang invented and patented basic
> technology we now call 'core' in 1955 2,708,722
> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> memory').
The patent may date from 1955, but by that time it was already in use.
Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
memory. There are some interesting photos at
https://en.wikipedia.org/wiki/Magnetic-core_memory.
Greg
--
Sent from my desktop computer.
Finger grog@lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
@ 2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 133+ messages in thread
From: Nemo Nusquam @ 2018-06-16 2:00 UTC (permalink / raw)
To: tuhs
On 06/15/18 21:08, Greg 'groggy' Lehey wrote:
> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
>> wrote:
>>
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>>
>> Be careful -- Forrester named it and put it into an array and build a
>> random access memory with it, but An Wang invented and patented basic
>> technology we now call 'core' in 1955 2,708,722
>> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
>> memory').
>
> Tha patent may date from 1955,
The patent issued in 1955 but the priority date is listed as 1949-10-21.
N.
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
@ 2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 133+ messages in thread
From: John P. Linderman @ 2018-06-16 2:17 UTC (permalink / raw)
To: Greg 'groggy' Lehey; +Cc: The Eunuchs Hysterical Society
[-- Attachment #1: Type: text/plain, Size: 1506 bytes --]
Forrester was my freshman adviser. A remarkably nice person. For those of
us who weren't going home for Thanksgiving, he invited us to his home. Well
above the call of duty. It's my understanding that his patent wasn't
anywhere near the biggest moneymaker. A patent for the production of
penicillin was, as I understood, the biggie.
On Fri, Jun 15, 2018 at 9:08 PM, Greg 'groggy' Lehey <grog@lemis.com> wrote:
> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <
> a.phillip.garcia@gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it and put it into an array and build a
> > random access memory with it, but An Wang invented and patented basic
> > technology we now call 'core' in 1955 2,708,722
> > <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
> > memory').
>
> Tha patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory. There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog@lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed. If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA
>
[-- Attachment #2: Type: text/html, Size: 2442 bytes --]
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
@ 2018-06-16 3:06 ` Clem cole
2 siblings, 0 replies; 133+ messages in thread
From: Clem cole @ 2018-06-16 3:06 UTC (permalink / raw)
To: Greg 'groggy' Lehey; +Cc: The Eunuchs Hysterical Society
Greg -- sorry if my words read, and came to be interpreted, as implying that 1955 was the date of the invention. I only wanted to clarify that Wang had the idea of magnetic (core) memory before Forrester; 1955 is the date of the patent grant, as you mentioned. My primary point was to be careful about giving all the credit to Forrester. It took both, as I understand the history. Again, I was not part of that fight 😉
It seems the courts have said what I mentioned: it was Wang's original idea, and I just wanted the history stated more clearly. That said, obviously Forrester improved on it (made it practical). And, BTW, IBM needed licenses for both to build its products.
FWIW: I've been on a couple of corporate patent committees. I called out the comment because I knew that we sometimes use the core-memory history as an example when we try to teach young inventors how to develop a proper invention disclosure (and it is how I was taught about some of these issues). What Forrester did I have seen used as an example of an improvement on, and differentiation from, a previous idea. That said, Wang had the fundamental patent, and Forrester needed to rely on Wang's idea to "reduce to practice" his own. As I said, IBM needed to license both in the end to make a product.
What we try to teach is how something is new (novel, in patent-speak) and to make sure they disclose what their ideas are built upon. And, as importantly: are you truly novel, and if you are, can you build on that previous idea without a license?
This is actually pretty important to get right and is not just an academic exercise. For instance, a few years ago I was granted a fundamental computer-synchronization patent for use in building supercomputers out of smaller computers (i.e. clusters). When we wrote the disclosure we had to show what was the same, what was built upon, and what made it new/different. Computer synchronization is an old idea, but what we did was quite new (and frankly would not have been considered in the '60s, as those designers did not have some of the issues of scale we have today). But because we were able to show both the base and how it was novel, the application went right through both in the USA and internationally.
So, back to my point: Forrester came up with the idea and the practical scheme of using Wang's concept of magnetic memory in an array, which, as I understand it, was what made Wang's idea practical -- just as my scheme did not invent synchronization, but traditional 1960s-style schemes are impractical with today's technology.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
> On Jun 15, 2018, at 9:08 PM, Greg 'groggy' Lehey <grog@lemis.com> wrote:
>
>> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia@gmail.com>
>> wrote:
>>
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>>
>> Be careful -- Forrester named it and put it into an array and build a
>> random access memory with it, but An Wang invented and patented basic
>> technology we now call 'core' in 1955 2,708,722
>> <https://patents.google.com/patent/US2708722A/en> (calling it `dynamic
>> memory').
>
> Tha patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory. There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog@lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed. If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA
^ permalink raw reply [flat|nested] 133+ messages in thread
* Re: [TUHS] core
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
@ 2018-06-20 10:06 ` Dave Horsfall
2 siblings, 0 replies; 133+ messages in thread
From: Dave Horsfall @ 2018-06-20 10:06 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
(Yeah, I'm a bit behind right now; I got involved in some electronics stuff
for a change.)
On Fri, 15 Jun 2018, A. P. Garcia wrote:
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Odd, I don't have that in my history notes; may I use it?
-- Dave
^ permalink raw reply [flat|nested] 133+ messages in thread
end of thread, other threads:[~2018-06-28 14:31 UTC | newest]
Thread overview: 133+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-21 22:44 [TUHS] core Nelson H. F. Beebe
2018-06-21 23:07 ` Grant Taylor via TUHS
2018-06-21 23:38 ` Toby Thain
2018-06-21 23:47 ` [TUHS] off-topic list Warren Toomey
2018-06-22 1:11 ` Grant Taylor via TUHS
2018-06-22 3:53 ` Robert Brockway
2018-06-22 4:18 ` Dave Horsfall
2018-06-22 11:44 ` Arthur Krewat
2018-06-22 14:28 ` Larry McVoy
2018-06-22 14:46 ` Tim Bradshaw
2018-06-22 14:54 ` Larry McVoy
2018-06-22 15:17 ` Steffen Nurpmeso
2018-06-22 17:27 ` Grant Taylor via TUHS
2018-06-22 19:25 ` Steffen Nurpmeso
2018-06-22 21:04 ` Grant Taylor via TUHS
2018-06-23 14:49 ` Steffen Nurpmeso
2018-06-23 15:25 ` Toby Thain
2018-06-23 18:49 ` Grant Taylor via TUHS
2018-06-23 21:05 ` Tom Ivar Helbekkmo via TUHS
2018-06-23 21:21 ` Michael Parson
2018-06-23 23:31 ` Grant Taylor via TUHS
2018-06-23 23:36 ` Larry McVoy
2018-06-23 23:37 ` Larry McVoy
2018-06-24 0:20 ` Grant Taylor via TUHS
2018-06-25 2:53 ` Dave Horsfall
2018-06-25 5:40 ` Grant Taylor via TUHS
2018-06-25 6:15 ` arnold
2018-06-25 7:27 ` Bakul Shah
2018-06-25 12:52 ` Michael Parson
2018-06-25 13:41 ` arnold
2018-06-25 13:56 ` arnold
2018-06-25 13:59 ` Adam Sampson
2018-06-25 15:05 ` Grant Taylor via TUHS
2018-06-26 9:05 ` Derek Fawcus
2018-06-28 14:25 ` [TUHS] email filtering (was Re: off-topic list) Perry E. Metzger
2018-06-23 22:38 ` [TUHS] off-topic list Steffen Nurpmeso
2018-06-24 0:18 ` Grant Taylor via TUHS
2018-06-24 10:04 ` Michael Kjörling
2018-06-25 16:10 ` Steffen Nurpmeso
2018-06-25 18:48 ` Grant Taylor via TUHS
2018-06-25 0:43 ` [TUHS] mail (Re: " Bakul Shah
2018-06-25 1:15 ` Lyndon Nerenberg
2018-06-25 2:44 ` George Michaelson
2018-06-25 3:04 ` Larry McVoy
2018-06-25 3:15 ` Bakul Shah
2018-06-25 16:26 ` Steffen Nurpmeso
2018-06-25 18:59 ` Grant Taylor via TUHS
2018-06-25 14:18 ` [TUHS] " Clem Cole
2018-06-25 15:28 ` [TUHS] off-topic list [ really mh ] Jon Steinhart
2018-06-26 7:49 ` Ralph Corderoy
2018-06-25 15:51 ` [TUHS] off-topic list Steffen Nurpmeso
2018-06-25 18:21 ` Grant Taylor via TUHS
2018-06-26 20:38 ` Steffen Nurpmeso
2018-06-22 16:07 ` Tim Bradshaw
2018-06-22 16:36 ` Steve Johnson
2018-06-22 20:55 ` Bakul Shah
2018-06-22 14:52 ` Ralph Corderoy
2018-06-22 15:13 ` SPC
2018-06-22 16:45 ` Larry McVoy
2018-06-22 15:28 ` Clem Cole
2018-06-22 17:17 ` Grant Taylor via TUHS
2018-06-22 18:00 ` Dan Cross
2018-06-22 17:29 ` Cág
-- strict thread matches above, loose matches on Subject: below --
2018-06-21 17:40 [TUHS] core Noel Chiappa
2018-06-21 17:07 Nelson H. F. Beebe
2018-06-21 13:46 Doug McIlroy
2018-06-21 16:13 ` Tim Bradshaw
2018-06-20 20:11 Doug McIlroy
2018-06-20 23:53 ` Tim Bradshaw
2018-06-19 12:23 Noel Chiappa
2018-06-19 12:58 ` Clem Cole
2018-06-19 13:53 ` John P. Linderman
2018-06-21 14:09 ` Paul Winalski
2018-06-19 11:50 Doug McIlroy
[not found] <20180618121738.98BD118C087@mercury.lcs.mit.edu>
2018-06-18 21:13 ` Johnny Billquist
2018-06-18 17:56 Noel Chiappa
2018-06-18 18:51 ` Clem Cole
2018-06-18 14:51 Noel Chiappa
2018-06-18 14:58 ` Warner Losh
[not found] <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
2018-06-18 5:58 ` Johnny Billquist
2018-06-18 12:39 ` Ronald Natalie
2018-06-17 21:18 Noel Chiappa
2018-06-17 17:58 Noel Chiappa
2018-06-18 9:16 ` Tim Bradshaw
2018-06-18 15:33 ` Tim Bradshaw
2018-06-17 14:36 Noel Chiappa
2018-06-17 15:58 ` Derek Fawcus
2018-06-16 22:57 Noel Chiappa
[not found] <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
2018-06-16 22:14 ` Johnny Billquist
2018-06-16 22:38 ` Arthur Krewat
2018-06-17 0:15 ` Clem cole
2018-06-17 1:50 ` Johnny Billquist
2018-06-17 10:33 ` Ronald Natalie
2018-06-20 16:55 ` Paul Winalski
2018-06-20 21:35 ` Johnny Billquist
2018-06-20 22:24 ` Paul Winalski
2018-06-16 13:49 Noel Chiappa
2018-06-16 14:10 ` Clem Cole
2018-06-16 14:34 ` Lawrence Stewart
2018-06-16 15:28 ` Larry McVoy
2018-06-16 14:10 ` Steve Nickolas
2018-06-16 14:13 ` William Pechter
2018-06-16 13:37 Noel Chiappa
2018-06-16 18:59 ` Clem Cole
2018-06-17 12:15 ` Derek Fawcus
2018-06-17 17:33 ` Theodore Y. Ts'o
2018-06-17 19:50 ` Jon Forrest
2018-06-18 14:56 ` Clem Cole
2018-06-18 12:36 ` Tony Finch
2018-06-16 12:58 Doug McIlroy
2018-06-20 11:10 ` Dave Horsfall
2018-06-16 12:51 Noel Chiappa
2018-06-16 13:11 ` Clem cole
2018-06-15 15:25 Noel Chiappa
2018-06-15 23:05 ` Dave Horsfall
2018-06-15 23:22 ` Lyndon Nerenberg
2018-06-16 6:36 ` Dave Horsfall
2018-06-16 19:07 ` Clem Cole
2018-06-18 9:25 ` Tim Bradshaw
2018-06-19 20:45 ` Peter Jeremy
2018-06-19 22:55 ` David Arnold
2018-06-20 5:04 ` Peter Jeremy
2018-06-20 5:41 ` Warner Losh
2018-06-15 11:19 A. P. Garcia
2018-06-15 13:50 ` Steffen Nurpmeso
2018-06-15 14:21 ` Clem Cole
2018-06-15 15:11 ` John P. Linderman
2018-06-15 15:21 ` Larry McVoy
2018-06-16 1:08 ` Greg 'groggy' Lehey
2018-06-16 2:00 ` Nemo Nusquam
2018-06-16 2:17 ` John P. Linderman
2018-06-16 3:06 ` Clem cole
2018-06-20 10:06 ` Dave Horsfall
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).