The Unix Heritage Society mailing list
* Re: [TUHS] OSI stack (Was: Posters)
From: Norman Wilson @ 2019-02-03 18:49 UTC
  To: tuhs

Noel Chiappa:

  (And apologies for the non-Unix content, but at least it's about computers,
  unlike all the postings about Jimmy Page's guitar; typical of the really poor
  S/N on this list.)

======

Didn't Jimmy Page's guitar use an LSI-11 running Lycklama's Mini-UNIX?
And what was his page size?

Norman Wilson
Toronto ON

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-07 19:04 UTC
  To: tuhs; +Cc: jnc

    > From: Grant Taylor via TUHS <tuhs@minnie.tuhs.org>

    > Seeing as how this is diverging from TUHS, I'd encourage replies to
    > the COFF copy that I'm CCing.

Can people _please_ pick either one list _or_ the other to reply to, so those
on both will stop getting two copies of every message? My mailbox is exploding!

   Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Larry McVoy @ 2019-02-07 18:50 UTC
  To: Grant Taylor; +Cc: TUHS, COFF

On Thu, Feb 07, 2019 at 11:07:27AM -0700, Grant Taylor via TUHS wrote:
> On 02/06/2019 01:47 PM, Kevin Bowling wrote:
> >There were protocols that fit better in the era like DeltaT with a simpler
> >state machine and connection handling.  Then there was a mad dash of
> >protocol development in the mid to late '80s that were measured by
> >various metrics to outperform TCP in practical and theoretical space.
> >Some of these seemed quite nice like XTP and are still in use in niche
> >defense applications.
> 
> $ReadingList++

Greg Chesson was the guy behind XTP.  I worked for him at SGI; he was a
hoot (we lost him to cancer a while back).  I think I've posted this before,
but here he is not long before he died (he came up to bitch about kids
these days with their shiny frameworks and newfangled languages; what's
wrong with C?):

http://mcvoy.com/lm/xtp+excavator

He was an engineer to the core: he refused to take any info from me on how
to run the machine, just sat there and figured it out.

--lm

* Re: [TUHS] OSI stack (Was: Posters)
From: Andy Kosela @ 2019-02-07 18:22 UTC
  To: Grant Taylor; +Cc: TUHS, COFF

On Thursday, February 7, 2019, Grant Taylor via TUHS <tuhs@minnie.tuhs.org>
wrote:

> Seeing as how this is diverging from TUHS, I'd encourage replies to the
> COFF copy that I'm CCing.
>
> On 02/06/2019 01:47 PM, Kevin Bowling wrote:
>
>> There were protocols that fit better in the era like DeltaT with a
>> simpler state machine and connection handling.  Then there was a mad dash
>> of protocol development in the mid to late ‘80s that were measured by
>> various metrics to outperform TCP in practical and theoretical space.  Some
>> of these seemed quite nice like XTP and are still in use in niche defense
>> applications.
>>
>
> $ReadingList++


XTP was/is indeed very interesting.  It was adopted by the US Navy for SAFENET
and created by Greg Chesson, who was active in the early UNIX community.
Not sure if we have him here on this list, though.

--Andy

* Re: [TUHS] OSI stack (Was: Posters)
From: Grant Taylor via TUHS @ 2019-02-07 18:07 UTC
  To: TUHS, COFF

Seeing as how this is diverging from TUHS, I'd encourage replies to the 
COFF copy that I'm CCing.

On 02/06/2019 01:47 PM, Kevin Bowling wrote:
> There were protocols that fit better in the era like DeltaT with a 
> simpler state machine and connection handling.  Then there was a mad 
> dash of protocol development in the mid to late ‘80s that were measured 
> by various metrics to outperform TCP in practical and theoretical 
> space.  Some of these seemed quite nice like XTP and are still in use in 
> niche defense applications.

$ReadingList++

> Positive, rather than negative acknowledgement has aged well as 
> computers got more powerful (the sender can pretty well predict loss 
> when it hasn't gotten an ACK back and opportunistically retransmit).  
> But RF people do weird things that violate end to end principle on cable 
> modems and radios to try and minimize transmissions.

Would you care to elaborate?

> One thing I think that is missing in my contemporary modern software 
> developers is a willingness to dig down and solve problems at the right 
> level.  People do clownish things like write overlay filesystems in 
> garbage collected languages.  Google's QUIC is a fine example of 
> foolishness.  I am mortified that is genuinely being considered for the 
> HTTP 3 standard.  But I guess we've entered the era where enough people 
> have retired that the lower layers are approached with mysticism and 
> deemed unable and unsuitable to change.  So the layering will continue 
> until eventually things topple over like the garbage pile in the movie 
> Idiocracy.

I thought one of the reasons that QUIC was UDP-based instead of its own 
transport protocol was that history has shown that the possibility and 
openness of networking are not sufficient to encourage the adoption of 
newer technologies.  Specifically, the long tail of history / legacy has 
hindered the introduction of new transport protocols.  I thought I 
remembered hearing that Google wanted to do a new transport protocol, 
but they thought that too many things would block it, thus slowing down 
its deployment.  Conversely, putting QUIC on top of UDP was a minor 
compromise that allowed the benefits to be adopted sooner.

Perhaps I'm misremembering.  I did a quick 45 second search and couldn't 
find any supporting evidence.

The only thing that comes to mind is IPsec's ESP(50) and AH(51) which—as 
I understand it—are filtered too frequently because they aren't ICMP(1), 
TCP(6), or UDP(17).  Too many firewalls interfere to the point that they 
are unreliable.
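
To make that concrete: ESP and AH are their own IP protocol numbers, so 
checking whether a path carries them at all means emitting raw datagrams 
with those numbers and capturing on the far side.  A rough sketch of the 
sending half (my own toy, not anything from the IPsec tooling; it needs 
root for the raw sockets):

/* Hedged sketch: send dummy IP protocol 50 (ESP) and 51 (AH) datagrams
 * toward a target, to see whether the path passes anything that isn't
 * ICMP/TCP/UDP.  You still need a capture on the far side to confirm
 * delivery; this only exercises the sending half. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void probe(const char *dst, int ipproto)
{
    int s = socket(AF_INET, SOCK_RAW, ipproto);  /* kernel builds the IP header */
    if (s < 0) { perror("socket"); return; }

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    inet_pton(AF_INET, dst, &sin.sin_addr);

    unsigned char payload[8] = { 0 };            /* dummy body, not a real SA */
    if (sendto(s, payload, sizeof payload, 0,
               (struct sockaddr *)&sin, sizeof sin) < 0)
        perror("sendto");
    close(s);
}

int main(int argc, char **argv)
{
    const char *dst = argc > 1 ? argv[1] : "192.0.2.1";  /* TEST-NET placeholder */
    probe(dst, IPPROTO_ESP);                     /* IP protocol 50 */
    probe(dst, IPPROTO_AH);                      /* IP protocol 51 */
    return 0;
}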

> Since the discussion meandered to the distinction of path 
> selection/routing.. for provider level networks, label switching to this 
> day makes a lot more sense IMO.. figure out a path virtual circuit that 
> can cut through each hop with a small flow table instead of trying to 
> coagulate, propagate routes from a massive address space that has to fit 
> in an expensive CAM and buffer and forward packets at each hop.

I think label switching has its advantages.  I think it also has some 
complications.

I feel like ATM, Frame Relay, and MPLS are all forms of label switching. 
Conceptually they all operate based on a pre-programmed path.



-- 
Grant. . . .
unix || die


* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-07  1:03 UTC
  To: tuhs; +Cc: jnc

    > From: Kevin Bowling

    > It just doesn't mesh with what I understand

Ah, sorry, I misunderstood your point.

Anyway, this is getting a little far afield for TUHS, so at some point it
would be better to move to the 'internet-history' list if you want to explore
it in depth. But a few more...

    > Is it fair to say most of the non-gov systems were UNIX during the next
    > handful of years?

I assume you mean 'systems running TCP/IP'? If so, I really don't know,
because for a while during that approximate period one saw many internets
which weren't connected to the Internet. (Which is why the capitalization is
important, the ill-educated morons at the AP, etc. notwithstanding.) I have no
good overall sense of that community, just anecdotes (and the plural of
'anecdote' is not 'data').

For the ones which _were_ connected to the Internet: prior to the advent of
the DNS, inspection of the host table file(s) would give a census. After that,
I'm not sure - I seem to recall someone did some work on a census of Internet
machines, but I forget who/where.

If you meant 'systems in general' or 'systems with networking of some sort', alas
I have even less of an idea! :-)

	Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-07  0:45 UTC
  To: tuhs; +Cc: jnc

    > From: Larry McVoy

    > TCP/IP was the first wide spread networking stack that you could get
    > from a pile of different vendors, Sun, Dec, SGI, IBM's AIX, every kernel
    > supported it.

Well, not quite - X.25 was also available on just about everything. TCP/IP's
big advantage over X.25 was that it worked well with LAN's, whereas X.25 was
pretty specific to WAN's.

That said, the wide range of TCP/IP implementations available, the
multi-vendor support, and its not being tied to any one vendor were a big
help. (Remember, I said the "_principal_ reason for TCP/IP's success"
[emphasis added] was the size of the community - other factors, such as these,
did play a role.)

The wide range of implementations was in part a result of DARPA's early
switch-over - every machine out there that was connected to the early Internet
(in the 80s) had to get a TCP/IP, and DARPA paid for a lot of them (e.g. the
BBN one for VAX Unix that Berkeley took on). The TOPS-20 one came from that
source, as did a whole bunch of others (many now extinct, but...).  MIT did one for
MS-DOS as soon as the IBM PC came out (1981), and that spun off to a business
(FTP Software) that was quite successful for a while (Windows 95 was, IIRC,
the first uSloth product with TCP/IP built in). Etc, etc.

    Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Kevin Bowling @ 2019-02-07  0:11 UTC
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

On Wed, Feb 6, 2019 at 5:03 PM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
>     > From: Kevin Bowling
>
>     > Seems like a case of winners write the history books.
>
> Hey, I'm just trying to pass on my best understanding as I saw it at the time,
> and in retrospect. If you're not interested, I'm happy to stop.

There's nothing personal.  It just doesn't mesh with what I understand
from non-UNIX first party sources in some mainframe, telco, and
networking books.  If I'm wrong I'll gladly update my opinion.  I
wasn't there.  I try to incorporate other sources outside UNIX into my
readings on computer history.  Maybe I see connections where there
were none, or they really were just parallel universes that didn't
influence each other.

>     > There were corporate and public access networks long before TCP was set
>     > in stone as a dominant protocol.
>
> Sure, there were lots of alternatives (BITNET, HEPNET, SPAN, CSNET, along with
> commercial systems like TYMNET and TELENET, along with a host of others whose
> names now escape me). And that's just the US; Europe had an alphabet soup of its
> own.
>
> But _very_ early on (1 Jan 1983), DARPA made all their fundees (which included
> all the top CS departments across the US) convert to TCP/IP. (NCP was turned
> off on the ARPANET, and everyone was forced to switch over, or get off the
> network.) A couple of other things went for TCP/IP too (e.g. NSF's
> super-computer network). A Federal ad hoc inter-departmental committee called
> the FRICC moved others (e.g. NASA and DoE) in the direction of TCP/IP,
> too.
>
> That's what created the large user community that eventually drove all the
> others out of business. (Metcalfe's Law.)

Is it fair to say most of the non-gov systems were UNIX during the
next handful of years?  I am asking for clarification, not a leading
question.

Regards,
Kevin

* Re: [TUHS] OSI stack (Was: Posters)
From: Kevin Bowling @ 2019-02-07  0:04 UTC
  To: Larry McVoy; +Cc: The Eunuchs Hysterical Society, Noel Chiappa

On Wed, Feb 6, 2019 at 4:52 PM Larry McVoy <lm@mcvoy.com> wrote:
>
> On Wed, Feb 06, 2019 at 04:40:24PM -0700, Kevin Bowling wrote:
> > Seems like a case of winners write the history books.  There were
> > corporate and public access networks long before TCP was set in stone
> > as a dominant protocol.  Most F500 were interchanging on SNA into the
> > 1990s.  And access networks like Tymnet, etc to talk to others.
>
> Yeah, but those were all tied to some vendor.  TCP/IP was the first
> wide spread networking stack that you could get from a pile of different
> vendors, Sun, Dec, SGI, IBM's AIX, every kernel supported it.   System V
> was late to the party, Lachman bought the rights to Convergent's STREAMs
> based TCP/IP stack and had a tidy business for a while selling that stack
> to lots of places.  It was an awful stack, I know because I ported it to
> a super computer and then to SCO's UNIX.  When Sun switched to a System
> Vr4 kernel, they bought it in some crazy bad deal and tried to use it
> in Solaris.  That lead to the tcp latency benchmark in lmbench because
> Oracle's clustered database ran like shit on Slowaris and I traced it
> down to their distributed lock manager and then whittled that code
> down to lat_tcp.c.  Sun eventually dumped that stack and went with
> Mentat's code and tuned that back up.  Sun actually dumped the socket
> code and went just with STREAMs interfaces but was forced to bring
> back sockets.  Socket's aren't great but they won and there is no
> turning back.
>
> > TCP, coupled with the rise of UNIX and the free wheel sharing of BSD
> > code, are what made the people to talk to.
>
> BSD wasn't free until much later than TCP/IP.  TCP/IP was developed in
> the 1970's.
>
> http://www.securenet.net/members/shartley/history/tcp_ip.htm
>
> https://www.livinginternet.com/i/ii_tcpip.htm

This is where I am out of my depth: was it a "success" before the
mid-to-late '80s as some dominant force?  My reading is no, outside of
universities with UNIX/BSD.  The meteoric success came a bit later in
the '80s with NSFNET and the rise of Cisco, and then the internet
exchange culture of the early '90s (CIX, PAIX, Metropolitan Fiber
Systems, etc.) got us to today and off Bell System stuff.

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-07  0:02 UTC
  To: tuhs; +Cc: jnc

    > From: Kevin Bowling

    > Seems like a case of winners write the history books.

Hey, I'm just trying to pass on my best understanding as I saw it at the time,
and in retrospect. If you're not interested, I'm happy to stop.

    > There were corporate and public access networks long before TCP was set
    > in stone as a dominant protocol.

Sure, there were lots of alternatives (BITNET, HEPNET, SPAN, CSNET, along with
commercial systems like TYMNET and TELENET, along with a host of others whose
names now escape me). And that's just the US; Europe had an alphabet soup of its
own.

But _very_ early on (1 Jan 1983), DARPA made all their fundees (which included
all the top CS departments across the US) convert to TCP/IP. (NCP was turned
off on the ARPANET, and everyone was forced to switch over, or get off the
network.) A couple of other things went for TCP/IP too (e.g. NSF's
super-computer network). A Federal ad hoc inter-departmental committee called
the FRICC moved others (e.g. NASA and DoE) in the direction of TCP/IP,
too.

That's what created the large user community that eventually drove all the
others out of business. (Metcalfe's Law.)

    Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Larry McVoy @ 2019-02-06 23:52 UTC
  To: Kevin Bowling; +Cc: The Eunuchs Hysterical Society, Noel Chiappa

On Wed, Feb 06, 2019 at 04:40:24PM -0700, Kevin Bowling wrote:
> Seems like a case of winners write the history books.  There were
> corporate and public access networks long before TCP was set in stone
> as a dominant protocol.  Most F500 were interchanging on SNA into the
> 1990s.  And access networks like Tymnet, etc to talk to others.

Yeah, but those were all tied to some vendor.  TCP/IP was the first
widespread networking stack that you could get from a pile of different
vendors: Sun, DEC, SGI, IBM's AIX; every kernel supported it.  System V
was late to the party; Lachman bought the rights to Convergent's STREAMS-based
TCP/IP stack and had a tidy business for a while selling that stack
to lots of places.  It was an awful stack; I know because I ported it to
a supercomputer and then to SCO's UNIX.  When Sun switched to a System
Vr4 kernel, they bought it in some crazy bad deal and tried to use it
in Solaris.  That led to the tcp latency benchmark in lmbench, because
Oracle's clustered database ran like shit on Slowaris and I traced it
down to their distributed lock manager and then whittled that code
down to lat_tcp.c.  Sun eventually dumped that stack and went with
Mentat's code and tuned that back up.  Sun actually dumped the socket
code and went just with STREAMS interfaces but was forced to bring
back sockets.  Sockets aren't great, but they won and there is no
turning back.
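
The shape of that benchmark is simple enough to sketch.  A toy in the
spirit of lat_tcp, not the actual lmbench code: connect to any TCP echo
service, bounce one byte back and forth, and divide by the iteration count.

/* Toy TCP round-trip latency measurement in the spirit of lat_tcp.c
 * (not the real lmbench code).  Point it at a TCP echo service, e.g.
 * inetd's echo on port 7, or any server that writes back what it reads. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "127.0.0.1";
    int port  = argc > 2 ? atoi(argv[2]) : 7;    /* classic echo port */
    int iters = 10000;

    int s = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    inet_pton(AF_INET, host, &sin.sin_addr);
    if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0) {
        perror("connect");
        return 1;
    }

    char c = 'x';
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        if (write(s, &c, 1) != 1 || read(s, &c, 1) != 1) {
            perror("ping-pong");
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("%d round trips, %.1f usec each\n", iters, us / iters);
    close(s);
    return 0;
}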

> TCP, coupled with the rise of UNIX and the free wheel sharing of BSD
> code, are what made the people to talk to.

BSD wasn't free until much later than TCP/IP.  TCP/IP was developed in
the 1970's.

http://www.securenet.net/members/shartley/history/tcp_ip.htm

https://www.livinginternet.com/i/ii_tcpip.htm

* Re: [TUHS] OSI stack (Was: Posters)
From: Kevin Bowling @ 2019-02-06 23:40 UTC
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

Seems like a case of winners write the history books.  There were
corporate and public access networks long before TCP was set in stone
as a dominant protocol.  Most F500 were interchanging on SNA into the
1990s, and used access networks like Tymnet, etc., to talk to others.

TCP, coupled with the rise of UNIX and the freewheeling sharing of BSD
code, is what created the people to talk to.

On Wed, Feb 6, 2019 at 4:19 PM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
>     > From: Kevin Bowling
>
>     > I think TCP was a success because of BSD/UNIX rather than its own
>     > merits.
>
> Nope. The principle reason for TCP/IP's success was that it got there first,
> and established a user community first. That advantage then fed back, to
> increase the lead.
>
> Communication protocols aren't like editors/OS's/yadda-yadda. E.g. I use
> Epsilon - but the fact that few others do isn't a problem/issue for me. On the
> other hand, if I designed, implemented and personally adopted the world's best
> communication protocol... so what? There'd be nobody to talk to.
>
> That's just _one_ of the ways that communication systems are fundamentally
> different from other information systems.
>
>         Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: George Michaelson @ 2019-02-06 23:37 UTC
  To: Warner Losh; +Cc: The Eunuchs Hysterical Society, Noel Chiappa

Alive and well in LDAP as a syntactic form. So, strongly alive in
functional systems worldwide, and in X.509 certificates.

As a typed entity in email addresses? NOPE.

-G

On Thu, Feb 7, 2019 at 3:17 AM Warner Losh <imp@bsdimp.com> wrote:
>
>
>
> On Mon, Feb 4, 2019 at 8:43 PM Warner Losh <imp@bsdimp.com> wrote:
>>
>>
>>
>> On Sun, Feb 3, 2019, 8:03 AM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>>>
>>>     > From: Warner Losh
>>>
>>>     > a bunch of OSI/ISO network stack posters (thank goodness that didn't
>>>     > become standard, woof!)
>>>
>>> Why?
>>
>>
>> Posters like this :). OSI was massively over specified...
>
>
> oops. Hit the list limit.
>
> Posters like this:
>
> https://people.freebsd.org/~imp/20190203_215836.jpg
>
> which show just how over-specified it was. I also worked at The Wollongong Group back in the early 90's and it was a total dog on the SysV 386 machines that we were trying to demo it on. A total and unbelievable PITA to set it up, and crappy performance once we got it going. Almost bad enough that we didn't show it at the trade show we were going to....  And that was just the lower layers of the stack plus basic name service. x.400 email addresses were also somewhat overly verbose. In many ways, it was a classic second system effect because they were trying to fix everything they thought was wrong with TCP/IP at the time without really, truly knowing the differences between actual problems and mere annoyances and how to properly weight the severity of the issue in coming up with their solutions.
>
> So x.400 vs smtp mail addresses: "G=Warner;S=Losh;O=WarnerLoshConsulting;PRMD=bsdimp;A=comcast;C=us" vis "imp@bsdimp.com"
>
> (assuming I got all the weird bits of the x.400 address right, it's been a long time and google had no good examples on the first page I could just steal...) The x.400 addresses were so unwieldy that a directory service was added on top of them x.500, which was every bit as baroque IIRC.
>
> TP4 might not have been that bad, but all the stuff above it was kinda crazy...
>
> Warner

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-06 23:18 UTC
  To: tuhs; +Cc: jnc

    > From: Kevin Bowling

    > I think TCP was a success because of BSD/UNIX rather than its own
    > merits.

Nope. The principal reason for TCP/IP's success was that it got there first,
and established a user community first. That advantage then fed back, to
increase the lead.

Communication protocols aren't like editors/OS's/yadda-yadda. E.g. I use
Epsilon - but the fact that few others do isn't a problem/issue for me. On the
other hand, if I designed, implemented and personally adopted the world's best
communication protocol... so what? There'd be nobody to talk to.

That's just _one_ of the ways that communication systems are fundamentally
different from other information systems.

	Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Kevin Bowling @ 2019-02-06 20:47 UTC
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

At the risk of inciting some inflammation, I think TCP was a success because
of BSD/UNIX rather than its own merits.  Moving the stack into the kernel,
thereby not requiring elaborate external communications controllers (like
IBM's VTAM, its predecessors, and other vendor networking approaches that
existed since the '60s), and piggybacking on the success of UNIX and then
"open systems" cemented the victory.  It has basically been made to work at
scale, the same way gasoline engines have been made to work in automobiles.

There were protocols that fit better in the era like DeltaT with a simpler
state machine and connection handling.  Then there was a mad dash of
protocol development in the mid to late ‘80s that were measured by various
metrics to outperform TCP in practical and theoretical space.  Some of
these seemed quite nice like XTP and are still in use in niche defense
applications.

Positive, rather than negative, acknowledgement has aged well as computers
have gotten more powerful (the sender can pretty well predict loss when it
hasn't gotten an ACK back and opportunistically retransmit).  But RF people
do weird things that violate the end-to-end principle on cable modems and
radios to try to minimize transmissions.
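
The idea is easy to sketch: send, arm a timer, and treat silence as
predicted loss.  A toy stop-and-wait sender (my own four-byte framing,
nothing like real TCP):

/* Toy stop-and-wait sender over a datagram socket: a missing ACK within
 * the timeout is treated as a predicted loss and the segment is resent.
 * Purely illustrative. */
#include <poll.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int send_reliable(int sock, uint32_t seq, const void *buf, size_t len,
                         int rto_ms, int max_tries)
{
    uint8_t pkt[1500];
    memcpy(pkt, &seq, sizeof seq);                 /* 4-byte sequence header */
    memcpy(pkt + sizeof seq, buf, len);

    for (int attempt = 0; attempt < max_tries; attempt++) {
        send(sock, pkt, sizeof seq + len, 0);

        struct pollfd pfd = { .fd = sock, .events = POLLIN };
        if (poll(&pfd, 1, rto_ms) > 0) {
            uint32_t ack;
            if (recv(sock, &ack, sizeof ack, 0) == sizeof ack && ack == seq)
                return 0;                          /* positively acknowledged */
        }
        /* no matching ACK inside the RTO: assume loss, retransmit */
    }
    return -1;
}

int main(void)
{
    int fds[2];
    socketpair(AF_UNIX, SOCK_DGRAM, 0, fds);       /* stand-in for a network */

    pid_t peer = fork();
    if (peer == 0) {                               /* child ACKs every packet */
        uint8_t buf[1500];
        for (;;) {
            if (recv(fds[1], buf, sizeof buf, 0) <= 0) _exit(0);
            send(fds[1], buf, sizeof (uint32_t), 0);   /* echo seq back as ACK */
        }
    }

    const char msg[] = "hello";
    if (send_reliable(fds[0], 1, msg, sizeof msg, 200, 3) == 0)
        printf("acknowledged\n");
    kill(peer, SIGKILL);
    waitpid(peer, NULL, 0);
    return 0;
}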

One thing I think is missing in contemporary software developers is a
willingness to dig down and solve problems at the right level.  People do
clownish things like write overlay filesystems in garbage-collected
languages.  Google's QUIC is a fine example of foolishness.  I am mortified
that it is genuinely being considered for the HTTP/3 standard.  But I guess
we've entered the era where enough people have retired that the lower
layers are approached with mysticism and deemed unable and unsuitable to
change.  So the layering will continue until eventually things topple over
like the garbage pile in the movie Idiocracy.

Since the discussion meandered to the distinction of path
selection/routing: for provider-level networks, label switching to this
day makes a lot more sense IMO.  Figure out a path (a virtual circuit) that
can cut through each hop with a small flow table, instead of trying to
aggregate and propagate routes from a massive address space that has to fit
in an expensive CAM, and buffer and forward packets at each hop.
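
To make the contrast concrete, a toy sketch (my own data structures, not
any router's code): a label-switched hop does one exact-match lookup in a
small table that path setup installed, while a plain IP hop has to do a
longest-prefix match over everything anyone has advertised.

/* Toy contrast between label switching and longest-prefix match.
 * Illustrative only; real routers use tries/TCAM for the latter. */
#include <stdint.h>
#include <stdio.h>

#define MAX_LABELS 1024

struct label_entry { int out_port; uint32_t out_label; int in_use; };
static struct label_entry lfib[MAX_LABELS];        /* label forwarding table */

/* Label switching: one exact-match lookup per incoming label. */
static int forward_by_label(uint32_t in_label, uint32_t *out_label)
{
    if (in_label >= MAX_LABELS || !lfib[in_label].in_use)
        return -1;
    *out_label = lfib[in_label].out_label;         /* swap the label */
    return lfib[in_label].out_port;
}

/* IP forwarding: scan every route, keep the most specific match. */
struct route { uint32_t prefix; int plen; int out_port; };

static int forward_by_lpm(const struct route *rib, int n, uint32_t dst)
{
    int best_len = -1, best_port = -1;
    for (int i = 0; i < n; i++) {
        uint32_t mask = rib[i].plen ? ~0u << (32 - rib[i].plen) : 0;
        if ((dst & mask) == rib[i].prefix && rib[i].plen > best_len) {
            best_len  = rib[i].plen;
            best_port = rib[i].out_port;
        }
    }
    return best_port;
}

int main(void)
{
    /* Path setup installed a cross-connect: label 42 in -> label 17, port 3. */
    lfib[42] = (struct label_entry){ .out_port = 3, .out_label = 17, .in_use = 1 };
    uint32_t out;
    int port = forward_by_label(42, &out);
    printf("label 42 -> port %d, outgoing label %u\n", port, out);

    struct route rib[] = {
        { 0xC0000200u, 24, 1 },                    /* 192.0.2.0/24 */
        { 0xC0000000u,  8, 2 },                    /* 192.0.0.0/8  */
    };
    printf("192.0.2.9 -> port %d\n", forward_by_lpm(rib, 2, 0xC0000209u));
    return 0;
}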

Regards,
Kevin

On Wed, Feb 6, 2019 at 10:49 AM Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:

>     > On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:
>
>     > In many ways, it was a classic second system effect because they were
>     > trying to fix everything they thought was wrong with TCP/IP at the time
>
> I'm not sure this part is accurate: the two efforts were contemporaneous; and
> my impression was they were trying to design the next step in networking,
> based on _their own_ analysis of what was needed.
>
>     > without really, truly knowing the differences between actual problems
>     > and mere annoyances and how to properly weight the severity of the issue
>     > in coming up with their solutions.
>
> This is I think true, but then again, TCP/IP fell into some of those holes
> too: fragmentation for one (although the issue there was unforeseen problems
> in doing it, not so much in it not being a real issue), all the 'unused'
> fields in the IP and TCP headers for things that never really got
> used/implemented (Type of Service, Urgent, etc).
>
>          Noel
>

* Re: [TUHS] OSI stack (Was: Posters)
From: Paul Winalski @ 2019-02-06 18:22 UTC
  To: Noel Chiappa; +Cc: tuhs

On 2/6/19, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>     > On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:
>
>     > In many ways, it was a classic second system effect because they were
>     > trying to fix everything they thought was wrong with TCP/IP at the time
>
> I'm not sure this part is accurate: the two efforts were contemporaneous; and
> my impression was they were trying to design the next step in networking, based
> on _their own_ analysis of what was needed.

That's my recollection as well.  The OSI effort was dominated by the
European telcos, nearly all of which were government-run monopolies.
They were as much (if not more) interested in protecting their own
turf as in developing the next step in networking.  A lot of the
complexity came from the desire to be everything to everybody.  As is
often the case, the result ended up being nothing to nobody.

Phase V of DEC's networking product (DECnet) supported X.25 as an
alternative to DEC's proprietary transport/routing layer.  I had to
install this on one of our VAXen so we could test DECmail, our
forthcoming X.400 product.  I remember X.25 being excessively
complicated and a bear to set up compared to Phase IV DECnet (the
proprietary protocol stack).

-Paul W.

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-06 17:49 UTC
  To: tuhs; +Cc: jnc

    > On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:

    > In many ways, it was a classic second system effect because they were
    > trying to fix everything they thought was wrong with TCP/IP at the time

I'm not sure this part is accurate: the two efforts were contemporaneous; and
my impression was they were trying to design the next step in networking, based
on _their own_ analysis of what was needed.

    > without really, truly knowing the differences between actual problems
    > and mere annoyances and how to properly weight the severity of the issue
    > in coming up with their solutions.

This is I think true, but then again, TCP/IP fell into some of those holes
too: fragmentation for one (although the issue there was unforeseen problems in
doing it, not so much in it not being a real issue), and all the 'unused' fields
in the IP and TCP headers for things that never really got
used/implemented (Type of Service, Urgent, etc).
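
For reference, roughly where those fields sit, per RFC 791 and RFC 793 (my
own declarations for illustration, not the system headers):

/* Rough field layout per RFC 791 (IPv4) and RFC 793 (TCP); my own
 * declarations for illustration, not <netinet/ip.h> or <netinet/tcp.h>. */
#include <stdint.h>

struct ipv4_hdr {
    uint8_t  ver_ihl;        /* version + header length */
    uint8_t  tos;            /* Type of Service: one of the 'unused' fields */
    uint16_t total_len;
    uint16_t id;
    uint16_t flags_frag;     /* flags + fragment offset (the fragmentation story) */
    uint8_t  ttl;
    uint8_t  protocol;
    uint16_t checksum;
    uint32_t src, dst;
};

struct tcp_hdr {
    uint16_t src_port, dst_port;
    uint32_t seq, ack;
    uint16_t off_flags;      /* data offset + reserved + flags (incl. URG) */
    uint16_t window;
    uint16_t checksum;
    uint16_t urgent_ptr;     /* the Urgent pointer: almost never used */
};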

	 Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Larry McVoy @ 2019-02-06 17:23 UTC
  To: Warner Losh; +Cc: The Eunuchs Hysterical Society, Noel Chiappa

On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:
> In many ways, it was a
> classic second system effect because they were trying to fix everything
> they thought was wrong with TCP/IP at the time without really, truly
> knowing the differences between actual problems and mere annoyances and how
> to properly weight the severity of the issue in coming up with their
> solutions.

Perfectly stated.

* Re: [TUHS] OSI stack (Was: Posters)
From: Warner Losh @ 2019-02-06 17:16 UTC
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

On Mon, Feb 4, 2019 at 8:43 PM Warner Losh <imp@bsdimp.com> wrote:

>
>
> On Sun, Feb 3, 2019, 8:03 AM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
>>     > From: Warner Losh
>>
>>     > a bunch of OSI/ISO network stack posters (thank goodness that didn't
>>     > become standard, woof!)
>>
>> Why?
>>
>
> Posters like this :). OSI was massively over specified...
>

oops. Hit the list limit.

Posters like this:

https://people.freebsd.org/~imp/20190203_215836.jpg

which show just how over-specified it was. I also worked at The Wollongong
Group back in the early 90's and it was a total dog on the SysV 386
machines that we were trying to demo it on. A total and unbelievable PITA
to set it up, and crappy performance once we got it going. Almost bad
enough that we didn't show it at the trade show we were going to....  And
that was just the lower layers of the stack plus basic name service. X.400
email addresses were also somewhat overly verbose. In many ways, it was a
classic second system effect because they were trying to fix everything
they thought was wrong with TCP/IP at the time without really, truly
knowing the differences between actual problems and mere annoyances and how
to properly weight the severity of the issue in coming up with their
solutions.

So X.400 vs SMTP mail addresses:
"G=Warner;S=Losh;O=WarnerLoshConsulting;PRMD=bsdimp;A=comcast;C=us" vs.
"imp@bsdimp.com"

(assuming I got all the weird bits of the x.400 address right, it's been a
long time and google had no good examples on the first page I could just
steal...) The X.400 addresses were so unwieldy that a directory service,
X.500, was added on top of them, which was every bit as baroque IIRC.

TP4 might not have been that bad, but all the stuff above it was kinda
crazy...

Warner

* Re: [TUHS] OSI stack (Was: Posters)
From: Grant Taylor via TUHS @ 2019-02-05 18:01 UTC
  To: tuhs

On 02/04/2019 01:29 PM, Noel Chiappa wrote:
> Does one name (in the generic high-level sense of the term 'name'; 
> e.g. an 'address' is a name for a unit of main memory) apply to the node 
> (host) no matter how many interfaces it has, or where it is/moves in 
> the network? If so, that name names the node. If not...

The vagaries of multi-homed hosts make the answer more complicated.

I generally think that a host name applies to the host, no matter how 
many interfaces it has, or where it is / moves in the network.  But 
that's the "host name".

There are other names, like service names, that apply to a service and 
can be (re)pointed to any desired back-end host name at any given time.

I usually think that host names should resolve to all IPs that are bound 
to a host.  I then rely on the DNS server to apply some optimization, 
when possible, about which IP is returned first, based on the client's IP. 
I also depend on clients preferring IPs in the directly attached 
network segment(s).
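
That behaviour is easy to see from the resolver side; a plain 
getaddrinfo(3) walk prints every address record behind a name, in the 
order the resolver hands them back (standard API, nothing exotic):

/* List every address behind a host name, in the order the resolver
 * returns them.  Plain getaddrinfo(3); run it with any name. */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    const char *name = argc > 1 ? argv[1] : "localhost";
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;          /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;        /* one entry per address */

    int err = getaddrinfo(name, NULL, &hints, &res);
    if (err) {
        fprintf(stderr, "%s: %s\n", name, gai_strerror(err));
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        const void *addr = p->ai_family == AF_INET
            ? (const void *)&((struct sockaddr_in  *)p->ai_addr)->sin_addr
            : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
    }
    freeaddrinfo(res);
    return 0;
}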

> No. Multi-homed hosts in IPv4 had multiple addresses. (There are all sorts 
> of kludges out there now, e.g. a single IP address shared by a pool of 
> servers, precisely because the set of entity classes in IPvN - hosts, 
> interfaces, etc - and namespaces for them were not rich enough for the 
> things that people actually wanted to do - e.g. have a pool of servers.)

Please allow me to clarify.

First, I'm excluding load balancers or similar techniques that spread an 
IP out across multiple back-end servers, except to say that I'd argue 
that such an IP belongs to the load balancer, not the back-end server. 
That aside:

Multi-homed IPv4 hosts (all that I've tested) usually allow traffic to a 
local IP to come in on any interface, even if it's not the interface 
that the IPv4 address is bound to.

Take the following host:

+---+   +-----------------+   +---+
| A +---+ eth0   B   eth1 +---+ C |
+---+   +-----------------+   +---+

Even if B has IPv4 forwarding disabled, A will very likely be able to 
talk to B via the IPv4 address bound to eth1.  Likewise C can talk to B 
using the IPv4 address bound to eth0.

My understanding is that IPv6 changes this paradigm to be explicitly the 
opposite.  If B has IPv6 forwarding disabled, A can't talk to B via the 
IPv6 address bound to eth1.  Nor can C talk to B via the IPv6 address 
bound to eth0.

That's why I was saying that the IPv4 addresses on eth0 and eth1 
belonged to the OS running on B, not the specific interfaces.

> Ignore what term(s) anyone uses, and apply the 'quack/walk' test - 
> how is it used, and what can it do?

I will have to test this when time permits.

But the above matches how I remember the duck quacking and walking.

> Names (in the generic sense above) used by the path selection mechanism 
> (routing protocols do path selection).
> 
> They aren't. But the path selection system can't aggregate information 
> (e.g.  routes) about multiple connected entities into a single item (to 
> make the path selection scale, in a large network like the Internet) 
> if the names the path selection system uses for them (i.e. addresses, 
> NSAP's, whatever) are allocated by several different naming authorities, 
> and thus bear no relationship to one another.
> 
> E.g. if my house's street address is 123 North Street, and the house next 
> door's address is 456 South Street, and 124 North Street is on the other 
> side of town, maps (i.e. the data used by a path selection algorithm to 
> decide how to get from A to B in the road network) aren't going to be 
> very compact.

Agreed.  The number of routes / prefixes and the paths to get to them 
has been exploding for quite a while now.  I think the IPv4 Default Free 
Zone is approaching 800,000 routes / paths.



-- 
Grant. . . .
unix || die


* Re: [TUHS] OSI stack (Was: Posters)
From: Clem Cole @ 2019-02-04 21:34 UTC
  To: Bakul Shah; +Cc: TUHS main list

On Mon, Feb 4, 2019 at 4:23 PM Bakul Shah <bakul@bitblocks.com> wrote:

> On Feb 4, 2019, at 12:29 PM, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
> >
> >> From: Grant Taylor
> >
> >> I'm not quite sure what you mean by naming a node vs network interface.
> >
> > Does one name (in the generic high-level sense of the term 'name'; e.g.
> an
> > 'address' is a name for a unit of main memory) apply to the node (host)
> no
> > matter how many interfaces it has, or where it is/moves in the network?
> If so,
> > that name names the node. If not...
>
> Xerox Network System (XNS) also had this property. A node had a
> unique 48 bit address and each network had a unique 32 bit address.
> Absolute host numbering meant you could move a host to another
> network easily. IDP packet layout
>

Yeah, at the time, Metcalfe had the advantage of seeing what NCP and IP had
already learned.  We used to say, in hindsight, that Ethernet had too many
address bits and IP not enough.  So Noel and team invented ARP ;-)

Clem

* Re: [TUHS] OSI stack (Was: Posters)
From: Bakul Shah @ 2019-02-04 21:13 UTC
  To: TUHS main list

On Feb 4, 2019, at 12:29 PM, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
> 
>> From: Grant Taylor
> 
>> I'm not quite sure what you mean by naming a node vs network interface.
> 
> Does one name (in the generic high-level sense of the term 'name'; e.g. an
> 'address' is a name for a unit of main memory) apply to the node (host) no
> matter how many interfaces it has, or where it is/moves in the network? If so,
> that name names the node. If not...

Xerox Network System (XNS) also had this property. A node had a
unique 48 bit address and each network had a unique 32 bit address.
Absolute host numbering meant you could move a host to another
network easily. IDP packet layout:


{checksum(2), length(2),
 transport ctl(1), packet type(1),
 dst net(4), dst host(6), dst socket(2),
 src net(4), src host(6), src socket(2),
 transport data(0 to 456), pad(0 to 1 byte)
}

IDP is at layer 3, the same as IPv4 or IPv6.
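
As a sketch, that layout as a C struct (sizes taken straight from the
field list above; the usual packing and byte-order caveats apply):

#include <stdint.h>

struct xns_idp_hdr {
    uint16_t checksum;        /* 2 bytes */
    uint16_t length;          /* 2 bytes */
    uint8_t  transport_ctl;   /* 1 byte  */
    uint8_t  packet_type;     /* 1 byte  */
    uint8_t  dst_net[4];      /* 32-bit network number */
    uint8_t  dst_host[6];     /* 48-bit absolute host address */
    uint16_t dst_socket;
    uint8_t  src_net[4];
    uint8_t  src_host[6];
    uint16_t src_socket;
    /* transport data and optional pad byte follow, per the layout above */
};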

* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-04 20:29 UTC
  To: tuhs; +Cc: jnc

    > From: Grant Taylor

    > I'm not quite sure what you mean by naming a node vs network interface.

Does one name (in the generic high-level sense of the term 'name'; e.g. an
'address' is a name for a unit of main memory) apply to the node (host) no
matter how many interfaces it has, or where it is/moves in the network? If so,
that name names the node. If not...

    > But I do know for a fact that in IPv4, IP addresses belonged to the
    > system.

No. Multi-homed hosts in IPv4 had multiple addresses. (There are all sorts
of kludges out there now, e.g. a single IP address shared by a pool of
servers, precisely because the set of entity classes in IPvN - hosts,
interfaces, etc - and namespaces for them were not rich enough for the
things that people actually wanted to do - e.g. have a pool of servers.)

Ignore what term(s) anyone uses, and apply the 'quack/walk' test - how is
it used, and what can it do?

    > I don't understand what you mean by using "names" for "path selection".

Names (in the generic sense above) used by the path selection mechanism
(routing protocols do path selection).

    > That's probably why I don't understand how routes are allocated by a
    > naming authority.

They aren't. But the path selection system can't aggregate information (e.g.
routes) about multiple connected entities into a single item (to make the path
selection scale, in a large network like the Internet) if the names the path
selection system uses for them (i.e. addresses, NSAP's, whatever) are
allocated by several different naming authorities, and thus bear no
relationship to one another.

E.g. if my house's street address is 123 North Street, and the house next door's
address is 456 South Street, and 124 North Street is on the other side of town,
maps (i.e. the data used by a path selection algorithm to decide how to get from
A to B in the road network) aren't going to be very compact.
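
In code terms (a toy sketch of my own, host byte order, nothing resembling a
real RIB): two prefixes only collapse into one shorter advertisement if the
allocator handed them out under a common parent in the first place.

/* Toy route aggregation check: two adjacent IPv4 prefixes of the same
 * length can be advertised as one prefix that is one bit shorter, but
 * only if they were allocated as siblings under a common parent. */
#include <stdint.h>
#include <stdio.h>

/* Returns 1 and sets *agg if a/plen and b/plen collapse into agg/(plen-1). */
static int aggregate(uint32_t a, uint32_t b, int plen, uint32_t *agg)
{
    if (plen < 1 || plen > 32)
        return 0;
    uint32_t parent_mask = plen == 1 ? 0 : ~0u << (32 - (plen - 1));
    if ((a & parent_mask) != (b & parent_mask) || a == b)
        return 0;                       /* not siblings under one parent */
    *agg = a & parent_mask;
    return 1;
}

int main(void)
{
    uint32_t agg;
    /* 192.0.2.0/25 + 192.0.2.128/25 -> 192.0.2.0/24: the allocator was kind. */
    if (aggregate(0xC0000200u, 0xC0000280u, 25, &agg))
        printf("aggregates to %u.%u.%u.%u/24\n",
               (unsigned)(agg >> 24), (unsigned)((agg >> 16) & 0xFF),
               (unsigned)((agg >> 8) & 0xFF), (unsigned)(agg & 0xFF));

    /* 192.0.2.0/25 + 198.51.100.0/25: unrelated allocations, no merge, so
     * both stay in everyone's tables -- the "123 North Street next door to
     * 456 South Street" situation. */
    if (!aggregate(0xC0000200u, 0xC6336400u, 25, &agg))
        printf("no aggregate possible\n");
    return 0;
}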

	Noel

* Re: [TUHS] OSI stack (Was: Posters)
From: Grant Taylor via TUHS @ 2019-02-03 16:51 UTC
  To: tuhs

On 2/3/19 8:02 AM, Noel Chiappa wrote:
> Why? The details have faded from my memory, but the lower 2 layers of 
> the stack (CLNP and TP4) I don't recall as being too bad. (The real 
> block to adoption was that people didn't want to get snarled up in the 
> ISO standards process.)

I too am curious.  I've got no first hand experience.  (I'm not counting 
getting a couple of SimH VAXen talking to each other over DECnetIV.)

> It at least managed (IIRC) to separate the concepts of, and naming 
> for, 'node' and 'network interface' (which is more than IPv6 managed, 
> apparently on the grounds that 'IPv4 did it that way', despite lengthy 
> pleading that in light of increased understanding since IPv4 was done, 
> they were separate concepts and deserved separate namespaces).

I'm not quite sure what you mean by naming a node vs network interface.

But I do know for a fact that in IPv4, IP addresses belonged to the 
system.  Conversely, in IPv6, IP addresses belong to the interface. 
This has important security implications on multi-homed systems.

> Yes, the allocation of the names used by the path selection (I use that 
> term because to too many people, 'routing' means 'packet forwarding') 
> was a total dog's breakast (allocation by naming authority - the very 
> definition of 'brain-damaged') but TCP/IP's was not any better, really.

I don't understand what you mean by using "names" for "path selection". 
Or are you referring to named networks, and that traffic must pass 
through a (named) network?

That's probably why I don't understand how routes are allocated by a 
naming authority.

> Yes, the whole session/presentation/application thing was ponderous and 
> probably over-complicated, but that could have been ditched and simpler 
> things run directly on TP4.

I've seen various parts of session and / or presentation applied to IPv4 
(and presumably IPv6) applications.  Some people like to say that 
session is one of those two (arguments ensue as to which) grafted on top 
of the application layer.  So, even that's still there in the IP world, 
just in an arguably different order.



-- 
Grant. . . .
unix || die


* Re: [TUHS] OSI stack (Was: Posters)
From: Noel Chiappa @ 2019-02-03 15:02 UTC
  To: tuhs; +Cc: jnc

    > From: Warner Losh

    > a bunch of OSI/ISO network stack posters (thank goodness that didn't
    > become standard, woof!)

Why? The details have faded from my memory, but the lower 2 layers of the
stack (CLNP and TP4) I don't recall as being too bad. (The real block to
adoption was that people didn't want to get snarled up in the ISO standards
process.)

It at least managed (IIRC) to separate the concepts of, and naming for, 'node'
and 'network interface' (which is more than IPv6 managed, apparently on the
grounds that 'IPv4 did it that way', despite lengthy pleading that in light of
increased understanding since IPv4 was done, they were separate concepts and
deserved separate namespaces). Yes, the allocation of the names used by the
path selection (I use that term because to too many people, 'routing' means
'packet forwarding') was a total dog's breakast (allocation by naming
authority - the very definition of 'brain-damaged') but TCP/IP's was not any
better, really.

Yes, the whole session/presentation/application thing was ponderous and probably
over-complicated, but that could have been ditched and simpler things run
directly on TP4.

(And apologies for the non-Unix content, but at least it's about computers,
unlike all the postings about Jimmy Page's guitar; typical of the really poor
S/N on this list.)

    Noel
