The Unix Heritage Society mailing list
* [TUHS] Sockets and the true UNIX
@ 2017-09-25 17:57 Norman Wilson
  2017-09-25 18:55 ` Clem Cole
  0 siblings, 1 reply; 31+ messages in thread
From: Norman Wilson @ 2017-09-25 17:57 UTC (permalink / raw)


Clem Cole:

  It was never designed for it.  dmr designed Streams to replace the
  tty handler.   I never understood  why the Summit guys insisted on
  forcing networking into it.

======

You're mistaken.  The point of the stream I/O setup with
stackable line disciplines, rather than the old single
line-discipline switch, was specifically to support networking
as well as tty processing.

Serial-device drivers in V7 used a single line-discipline
driver, used variously for canonical-tty handling and for
network protocols.  The standard system as used outside
the labs had only one line discipline configured, with
standard tty handling (see usr/sys/conf/c.c).  There were
driver source files for what I think were internal-use-only
networks (dev/pk[12].c, perhaps), but I don't think they
were used outside AT&T.

The problem Dennis wanted to solve was that tty handling
and network protocol handling interfered with one another;
you couldn't ask the kernel to do both, because there was
only one line discipline at a time.  Hence the stackable
modules.  It was possible to duplicate tty handling (probably
by placing calls to the regular tty line discipline's innards)
within the network-protocol code, but that was messy.  It also
ran into trouble when people wanted to use the C shell, which
expected its own special `new tty' line discipline, so the
network code would have to know which tty driver to call.
It made more sense to stack the modules instead, so the tty
code was there only if it was needed, and different tty
drivers could exist without the network code knowing or caring.
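
(As a sketch of the push/pop idea -- shown here with the later System V
STREAMS calls, since those are the ones that were documented and survived;
the Research stream interface differed in detail:)

    #include <stropts.h>    /* I_PUSH, I_POP; SVR3+, not Research Unix */

    /* Stack a tty module on an already-open network stream, use it,
       then pop it off again; the network module below is none the wiser. */
    int
    with_tty(int fd)
    {
        if (ioctl(fd, I_PUSH, "ldterm") < 0)   /* "ldterm" is the SVR4 name */
            return -1;
        /* ... fd now behaves like a canonical terminal ... */
        return ioctl(fd, I_POP, 0);
    }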

When I arrived at the Labs in 1984, the streams code was in
use daily by most of us in 1127.  The terminals on our desks
were plugged into serial ports on Datakit (like what we call
a terminal server now).  I would turn on my terminal in the
morning, tell the prompt which system I wanted to connect to,
and so far as I could tell I had a direct serial connection.
But in the remote host, my shell talked to an instance of the
tty line module, which exchanged data with a Datakit protocol
module, which exchanged data with the low-level Datakit driver.
If I switched to the C shell (I didn't but some did), csh would
pop off the tty module and push on the newtty module, and the
network code was none the wiser.

Later there was a TCP/IP that used the stream mechanism.  The
first version was shoehorned in by Robert T Morris, who worked
as a summer intern for us; it was later cleaned up considerably
by Paul Glick.  It's more complicated because of all the
multiplexers involved (Ethernet packets split up by protocol
number; IP packets divided by their own protocol number;
TCP packets into sessions), but it worked.  I still use it at
home.  Its major flaw is that details of the original stream
implementation make it messy to handle windows of more than
4096 bytes; there are also some quirks involving data left in
the pipe when a connection closes, something Dennis's code
doesn't handle well.

The much-messier STREAMS that came out of the official System
V people had fixes for some of that, but at the cost of quite
a bit more complexity; it could probably be done rather better.
At one point I wanted to have a go at it, but I've never had
the time, and now I doubt I ever will.

One demonstration of virtue, though: although Datakit was the
workhorse network in Research when I was there (and despite
the common bias against virtual circuits it worked pretty well;
the major drawback was that although the underlying Datakit
fabric could run at multiple megabits per second, we never had
a host interface that could reliably run at even a single megabit),
we did once arrange to run TCP/IP over a Datakit connection.
It was very simple in concept: make a Datakit connection (so the
Datakit protocol module is present); push an IP instance onto
that stream; and off you go.
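
(In outline, and with made-up names -- dkdial() and push_module() here are
stand-ins for the real calls, which I won't guess at:)

    int fd = dkdial("target-host");  /* Datakit protocol module now on the stream */
    push_module(fd, "ip");           /* stack an IP instance on top of it */
    /* fd now reads and writes IP packets over the Datakit circuit */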

I did something similar in my home V10 world when quickly writing
my own implementation of PPP from the specs many years ago.
The core of that code is still in use in my home-written PPPoE code.
PPP and PPPoE are both outside the kernel; the user-mode program
reads and writes the serial device (PPP) or an Ethernet instance
that returns just the desired protocol types (PPPoE), does the
PPP processing, and reads and writes IP packets to a (full-duplex
stream) pipe on the other end of which is pushed the IP module.

All this is very different from the socket(2) way of thinking,
and it has its vices, but it also has its virtues.

Norman Wilson
Toronto ON




* [TUHS] Sockets and the true UNIX
  2017-09-25 17:57 [TUHS] Sockets and the true UNIX Norman Wilson
@ 2017-09-25 18:55 ` Clem Cole
  0 siblings, 0 replies; 31+ messages in thread
From: Clem Cole @ 2017-09-25 18:55 UTC (permalink / raw)



Norman - thanks - fair enough, but I stand by my original statement.

Greg, Dennis, Presotto, Sam, and a couple of others spent an evening
discussing it all at USENIX one night, early on, before Dennis had finished
the original implementation.    I'm quite sure that, while it was used for
networking, the TTY handler and low-speed I/O were the driver.

On Mon, Sep 25, 2017 at 1:57 PM, Norman Wilson <norman at oclsc.org> wrote:

> Clem Cole:
>
>   It was never designed for it.  dmr designed Streams to replace the
>   tty handler.   I never understood  why the Summit guys insisted on
>   forcing networking into it.
>
> ======
>
> You're mistaken.

Maybe -- it's possible, but as I said, I certainly talked to Dennis and
Greg about this at length.  It was a hot topic a number of times.   Greg
could get sort of animated ;-)




>   The point of the stream I/O setup with
> stackable line disciplines, rather than the old single
> line-discipline switch, was specifically to support networking
> as well as tty processing.
>

TTY, I 100% agree - this was exactly the problem that was trying to be
solved; being able to handle networking was a side effect.

>
> ...
>
>
> The problem Dennis wanted to solve was that tty handling
> and network protocol handling interfered with one another;
> you couldn't ask the kernel to do both, because there was
> only one line discipline at a time.

This was an issue, but I don't think it was as important as the desire
for stackable modules, so we could mix and match line disciplines, period.




> Hence the stackable
> modules.

Right ...  that was a driver ....



> ...
>  which
> expected its own special `new tty' line discipline, so the
> network code would have to know which tty driver to call.
> It made more sense to stack the modules instead, so the tty
> code was there only if it was needed, and different tty
> drivers could exist without the network code knowing or caring.
>
Exactly....



>
> When I arrived at the Labs in 1984, the streams code was in
> use daily by most of us in 1127.  The terminals on our desks
> were plugged into serial ports on Datakit (like what we call
> a terminal server now).

Right... exactly what Greg and Dennis were trying to fix.

The question was how the top half of the tty handler could serve various
needs, and how to implement it efficiently.
Some tty handlers had become quite sophisticated... UCB's line discipline
was just the start.

For instance, Steve Zimmerman did a version that was a sort of combination
of UCB, ITS and TENEX, which Masscomp would pick up (and which I sometimes
miss).  IIRC, Greg's position in the argument was that we should ultimately
be able to create something user-providable, that could make things as
fancy or as plain as an application needed.

Instead of doing stty/gtty calls to turn on cbreak or the like, an
application like emacs/vi or whatever could ask for a very simple path
[Domain referred to this as 'borrowing' the path, IIRC].   Other programs,
say ed and the Bourne shell, might want something fairly fancy under the
covers.
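
(For reference, the stty/gtty-style request in the V7 sgtty interface went
roughly like this:)

    #include <sgtty.h>              /* V7-era terminal interface */

    struct sgttyb sg;

    ioctl(0, TIOCGETP, &sg);        /* gtty: fetch the current modes */
    sg.sg_flags |= CBREAK;          /* wake the program on each character */
    ioctl(0, TIOCSETP, &sg);        /* stty: set them back */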

The stacking scheme was so that we could easily define how chars would be
handled and which layer would handle them.
The problem, as you point out, was the transitions between different
'modes' - which the original line discipline code had trouble with and
stacking and unstacking only made worse.   But I digress.




>
> Later there was a TCP/IP that used the stream mechanism.

Yup....   but it was an afterthought .... 'look what I can do.'
My point is, it was a secondary effect... it was there and it worked.


> ....
>
> All this is very different from the socket(2) way of thinking,
> and it has its vices, but it also has its virtues.
>

And yes, sockets had not (yet) completely taken over.
These were some of the experiments to see what would happen.

Now, what caused it to fail as a networking interface?

Clem



* [TUHS] Sockets and the true UNIX
  2017-09-22 20:57               ` Chris Torek
@ 2017-09-24 18:04                 ` Chet Ramey
  0 siblings, 0 replies; 31+ messages in thread
From: Chet Ramey @ 2017-09-24 18:04 UTC (permalink / raw)


On 9/22/17 4:57 PM, Chris Torek wrote:
>> In that sense, job control and process groups *are* independent of
>> the terminal device.
>>
>> The complication arises when you have to multiplex one terminal
>> between several different jobs in an interactive shell. Setting
>> the terminal process group correctly -- the only way to accomplish
>> that -- is fragile and the consequences of setting it to the wrong
>> value are inconvenient, to say the least.
> 
> Sure, but here again you can have the shell hang on to the
> (kernel allocated) pgroup, and any control operation amounts
> to, e.g., "set tty pgroup to the one for my pgroupid X".
> Since the shell is waiting for everything in the pipeline
> to exit, the shell knows when to close its "pgid descriptor".

That's not the problem this solves. The problem with allocating
process groups in the shell (and making them equal to the process id
of the pgrp leader) is that you end up with a race condition when
trying to add subsequent processes in a pipeline to that pgrp.
It forces you to artificially prevent the process group leader from
running or exiting until you're finished setting up the entire
pipeline.
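
(For the record, the usual POSIX-era mitigation is for both parent and
child to call setpgid(), so either ordering works; a sketch, with argv
standing in for the parsed command:)

    pid_t pid, pgid = 0;               /* 0 until the first child exists */

    pid = fork();
    if (pid == 0) {
        setpgid(0, pgid);              /* child: join job's pgrp (0 = become leader) */
        execvp(argv[0], argv);
        _exit(127);
    }
    if (pgid == 0)
        pgid = pid;                    /* first child is the process group leader */
    setpgid(pid, pgid);                /* parent too; failure after the child
                                          has already exec'd is harmless */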

The shell has to set the terminal process group to arbitrary process
groups, not just its own: that's how `fg' has to work. For that purpose,
tcsetpgrp is fine. You just have to make sure that you only call it
when the shell is in the foreground.
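
(Roughly, with `job' and `tty_fd' standing in for the shell's own
bookkeeping:)

    int status;

    tcsetpgrp(tty_fd, job->pgid);      /* hand the terminal to the job */
    kill(-job->pgid, SIGCONT);         /* resume it if it was stopped */
    while (waitpid(-job->pgid, &status, WUNTRACED) > 0)
        if (WIFSTOPPED(status))
            break;                     /* stopped again; shell takes over */
    tcsetpgrp(tty_fd, getpgrp());      /* take the terminal back */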

As for other processes, I'm not sure exactly what you mean by `control
operation', but you surely would want a background process to get stopped
if it tries to set the terminal attributes.


> (If you abandon the pipeline entirely, you also abandon the
> ability to attach it to a tty.  There should be some way to
> "rejoin" pgroups, which would let you attach them to new tty
> sessions, which is something that is missing from the current
> system.  But that's another thing entirely.)

This is something that would be nice to have, but there are security
considerations that make it tricky. Not unsolvable, but tricky to get
right.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/




* [TUHS] Sockets and the true UNIX
  2017-09-22 19:08             ` Chet Ramey
@ 2017-09-22 20:57               ` Chris Torek
  2017-09-24 18:04                 ` Chet Ramey
  0 siblings, 1 reply; 31+ messages in thread
From: Chris Torek @ 2017-09-22 20:57 UTC (permalink / raw)


>In that sense, job control and process groups *are* independent of
>the terminal device.
>
>The complication arises when you have to multiplex one terminal
>between several different jobs in an interactive shell. Setting
>the terminal process group correctly -- the only way to accomplish
>that -- is fragile and the consequences of setting it to the wrong
>value are inconvenient, to say the least.

Sure, but here again you can have the shell hang on to the
(kernel allocated) pgroup, and any control operation amounts
to, e.g., "set tty pgroup to the one for my pgroupid X".
Since the shell is waiting for everything in the pipeline
to exit, the shell knows when to close its "pgid descriptor".

(If you abandon the pipeline entirely, you also abandon the
ability to attach it to a tty.  There should be some way to
"rejoin" pgroups, which would let you attach them to new tty
sessions, which is something that is missing from the current
system.  But that's another thing entirely.)

Chris




* [TUHS] Sockets and the true UNIX
  2017-09-22 18:14           ` Chris Torek
  2017-09-22 18:43             ` Clem Cole
@ 2017-09-22 19:08             ` Chet Ramey
  2017-09-22 20:57               ` Chris Torek
  1 sibling, 1 reply; 31+ messages in thread
From: Chet Ramey @ 2017-09-22 19:08 UTC (permalink / raw)


On 9/22/17 2:14 PM, Chris Torek wrote:

> Job control ought to be independent of tty devices: each shell
> pipeline is a "job", and the shell should group them by having the
> kernel create an initial empty(ish -- the shell is a member of
> both its original pgroup and this one it just created, until it
> detaches from it) job, then pushing each process it forks into
> that pgroup.

In that sense, job control and process groups *are* independent of
the terminal device.

The complication arises when you have to multiplex one terminal
between several different jobs in an interactive shell. Setting
the terminal process group correctly -- the only way to accomplish
that -- is fragile and the consequences of setting it to the wrong
value are inconvenient, to say the least.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/




* [TUHS] Sockets and the true UNIX
  2017-09-22 18:14           ` Chris Torek
@ 2017-09-22 18:43             ` Clem Cole
  2017-09-22 19:08             ` Chet Ramey
  1 sibling, 0 replies; 31+ messages in thread
From: Clem Cole @ 2017-09-22 18:43 UTC (permalink / raw)



On Fri, Sep 22, 2017 at 2:14 PM, Chris Torek <torek at torek.net> wrote:

> Kulp's name (and IIASA) appeared in several source files,


Which is exactly my point.... As Larry said -- "Huh, I too thought Bill
Joy did it.  Did he integrate it and get credit that way?"

Most people, even really smart ones that know a lot of the history like
Larry, think job control came from BSD.   Bill recognized a good idea (and
I believe it came in on the PDP-11 version from IIASA and MIT BTW) and then
peed on it as it was made part of 4.1BSD.

To his credit, he was amazing at recognizing great ideas and
integrating them; some of them make the system much more usable and I
could not live without them (I really would not want to type day-to-day
on V6 or V7 - which I did years ago and loved it >>then<<).

The core BSD system is the basic UNIX that the ROMs in my fingers and my
brain expect when I'm typing.  Bill was the person that shepherded it and I
thank him.    But it was the work of a lot of people, both at UCB (and not
just CSRG) and outside.



* [TUHS] Sockets and the true UNIX
  2017-09-22 14:57         ` Chet Ramey
@ 2017-09-22 18:14           ` Chris Torek
  2017-09-22 18:43             ` Clem Cole
  2017-09-22 19:08             ` Chet Ramey
  0 siblings, 2 replies; 31+ messages in thread
From: Chris Torek @ 2017-09-22 18:14 UTC (permalink / raw)


>Kulp did the signal work in the kernel and added the code to the C shell
>to use it (or had Eric Cooper, who was working for him as a student at
>the time, do fg/bg). He did his work on 3BSD and sent the mods to Berkeley,
>where Joy took them and integrated them into 4.1 BSD.

Kulp's name (and IIASA) appeared in several source files, I think;
I know I came across them.

Job control is a little overly complicated but there's only one
really big mistake in it, in my opinion, and that is that the
process group IDs are not allocated in the kernel.  This is what
led to all the "session group leader" cruft in POSIX.

Job control ought to be independent of tty devices: each shell
pipeline is a "job", and the shell should group them by having the
kernel create an initial empty(ish -- the shell is a member of
both its original pgroup and this one it just created, until it
detaches from it) job, then pushing each process it forks into
that pgroup.

I'll use "fd" as a name here since I think that's one way to
return them (they're refcounted like file descriptors) but not
the only way, and it probably makes more sense to return a
pgroup ID directly and have a pgclose()...

    // in a shell:
    fd = new_pgroup(); // kernel allocates pgroup

    for (all commands in a pipeline) {
        pid = fork();
        if (pid == 0) {
            // child
            pid = getpid();
            setpgrp(pid, fd);
            ... all the other stuff we do ...
            exec(childprocess)
        }
        if (pid < 0) ... clean up ...
        // still in the shell - set up the rest of the pipe using
        // the same pgroup
    }
    close(fd); // we're done with the pgroup, it goes away
    // when the last pid we created above also exits

Note that this would mean that anything that creates login sessions
would have to have this same kind of code (create a pgroup, fork,
push child into pgroup, exec shell).  But that's already true for
job control in general: init or getty needs to be job-control-aware,
and this makes their code simpler.

Chris




* [TUHS] Sockets and the true UNIX
  2017-09-22 10:36 ` Paul Ruizendaal
  2017-09-22 14:32   ` Clem Cole
@ 2017-09-22 18:00   ` Chris Torek
  1 sibling, 0 replies; 31+ messages in thread
From: Chris Torek @ 2017-09-22 18:00 UTC (permalink / raw)


[I said]
>> This also knocks out the need for SO_REUSEADDR ...

>Under TCP/IP I'm not sure you can. The protocol specifies that you must
>wait for a certain period of time (120 sec, if memory serves me right)
>before reusing an address/port combo, so that all in-flight packets have
>disappeared from the network. Only if one is sure that this is not an
>issue can one use SO_REUSEADDR.

That's independent of this issue.  The problem is that when you
call bind(), the system does not know if you are doing it as
the first half of bind()/connect(), or as the first half of
bind()/listen() (followed by an accept() loop).

If you are going to do the latter, you are declaring that you want
to serve *new* SYN requests.  This is when you (currently) must set
SO_REUSEADDR, because bind() checks your local address against the
universe of all local/remote pairs.  But if the system knows that
your call (which, remember, isn't bind() anymore) is your
declaration of intent to be a server, the OS can just check
whether there's already someone declared as "the server".

If you intend to "serve" one specific foreign address (FTP):

   bind_and_connect(fd, local, remote);

the test is: "does <local,remote> specify a unique address"
(and possibly "does <local> match what we already suggested
to you and reserved for you).  If you intend to serve "*":

   bind_and_connect(fd, local, NULL);

the test is: "is there already a <local,*>".

In the server case any existing 2MSL timeouts are irrelevant;
what matters is whether you are competing with another server.

(Note that libc will probably have various convenience wrappers
atop the system call, kind of like libc has system() atop
fork/exec.  Most programs just want to create and bind_and_connect
a socket in one pass.)

Chris




* [TUHS] Sockets and the true UNIX
  2017-09-22 14:49       ` Larry McVoy
@ 2017-09-22 14:57         ` Chet Ramey
  2017-09-22 18:14           ` Chris Torek
  0 siblings, 1 reply; 31+ messages in thread
From: Chet Ramey @ 2017-09-22 14:57 UTC (permalink / raw)


On 9/22/17 10:49 AM, Larry McVoy wrote:
> On Fri, Sep 22, 2017 at 10:42:19AM -0400, Chet Ramey wrote:
>> On 9/22/17 10:32 AM, Clem Cole wrote:
>>
>>> (describing MIT's job control that Joy swiped
>>> and added to BSD), 
>>
>> The original concepts might have come from MIT, but the BSD implementation
>> was done by Jim Kulp at IIASA before it was folded into 4.1 BSD.
> 
> Huh, I too thought Bill Joy did it.  Did he integrate it and get credit 
> that way?

Kulp did the signal work in the kernel and added the code to the C shell
to use it (or had Eric Cooper, who was working for him as a student at
the time, do fg/bg). He did his work on 3BSD and sent the mods to Berkeley,
where Joy took them and integrated them into 4.1 BSD.

As far as I can tell, Kulp has always gotten credit for doing the initial
work, Joy for the `standard' BSD integration, and David Korn for doing
the second (non-csh) implementation.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/




* [TUHS] Sockets and the true UNIX
  2017-09-22 14:47       ` Chet Ramey
@ 2017-09-22 14:52         ` Lars Brinkhoff
  0 siblings, 0 replies; 31+ messages in thread
From: Lars Brinkhoff @ 2017-09-22 14:52 UTC (permalink / raw)


Chet Ramey wrote:
>> Clem Cole wrote:
>>> (describing MIT's job control that Joy swiped
>>> and added to BSD), 
>> The original concepts might have come from MIT, but the BSD implementation
>> was done by Jim Kulp at IIASA before it was folded into 4.1 BSD.
>
> Though to be fair I think Kulp was an MIT guy who was inspired by ITS.

Yes, that's what he wrote to me when I asked him about it.




* [TUHS] Sockets and the true UNIX
  2017-09-22 14:42     ` Chet Ramey
  2017-09-22 14:47       ` Chet Ramey
@ 2017-09-22 14:49       ` Larry McVoy
  2017-09-22 14:57         ` Chet Ramey
  1 sibling, 1 reply; 31+ messages in thread
From: Larry McVoy @ 2017-09-22 14:49 UTC (permalink / raw)


On Fri, Sep 22, 2017 at 10:42:19AM -0400, Chet Ramey wrote:
> On 9/22/17 10:32 AM, Clem Cole wrote:
> 
> > (describing MIT's job control that Joy swiped
> > and added to BSD), 
> 
> The original concepts might have come from MIT, but the BSD implementation
> was done by Jim Kulp at IIASA before it was folded into 4.1 BSD.

Huh, I too thought Bill Joy did it.  Did he integrate it and get credit 
that way?




* [TUHS] Sockets and the true UNIX
  2017-09-22 14:42     ` Chet Ramey
@ 2017-09-22 14:47       ` Chet Ramey
  2017-09-22 14:52         ` Lars Brinkhoff
  2017-09-22 14:49       ` Larry McVoy
  1 sibling, 1 reply; 31+ messages in thread
From: Chet Ramey @ 2017-09-22 14:47 UTC (permalink / raw)


On 9/22/17 10:42 AM, Chet Ramey wrote:
> On 9/22/17 10:32 AM, Clem Cole wrote:
> 
>> (describing MIT's job control that Joy swiped
>> and added to BSD), 
> 
> 
> The original concepts might have come from MIT, but the BSD implementation
> was done by Jim Kulp at IIASA before it was folded into 4.1 BSD.

Though to be fair I think Kulp was an MIT guy who was inspired by ITS.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/




* [TUHS] Sockets and the true UNIX
  2017-09-22 14:32   ` Clem Cole
@ 2017-09-22 14:42     ` Chet Ramey
  2017-09-22 14:47       ` Chet Ramey
  2017-09-22 14:49       ` Larry McVoy
  0 siblings, 2 replies; 31+ messages in thread
From: Chet Ramey @ 2017-09-22 14:42 UTC (permalink / raw)


On 9/22/17 10:32 AM, Clem Cole wrote:

> (describing MIT's job control that Joy swiped
> and added to BSD), 


The original concepts might have come from MIT, but the BSD implementation
was done by Jim Kulp at IIASA before it was folded into 4.1 BSD.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/




* [TUHS] Sockets and the true UNIX
  2017-09-22 10:36 ` Paul Ruizendaal
@ 2017-09-22 14:32   ` Clem Cole
  2017-09-22 14:42     ` Chet Ramey
  2017-09-22 18:00   ` Chris Torek
  1 sibling, 1 reply; 31+ messages in thread
From: Clem Cole @ 2017-09-22 14:32 UTC (permalink / raw)



Paul - you made a couple of comments and asked a few questions which I'll
take a shot at as best I can, having been there.   Sam spent many hours in
my apartment drinking beer, watching football, sometimes doing laundry, and
not least arguing networking stuff (I had a washer and dryer in those days
and he had a vacuum cleaner which I borrowed from him).  Before CSRG,
Sam had been working for a networking startup in the lower bay whose name
escapes me at the moment.   IIRC they had been doing a terminal
concentrator of some type.  Previous to UCB, I had been at Tek; Presotto
at DEC; Baden and George at Pr1me on their 750.

Sundays were often a football and beer session, usually at Mike Carey's and
my place.

On Fri, Sep 22, 2017 at 6:36 AM, Paul Ruizendaal <pnr at planet.nl> wrote:

>
> Could you elaborate on how Accent networking influenced Joy's sockets?
> ...
>

The Accent system was definitely heavy on people's minds.   The 1979 CMU
SPICE machine proposal had made quite a splash (I should scan it; I still
have an original copy).  Bert Halstead and Gettys later told me the story
of how, at MIT, they had a meeting when it came out and the whole tone was
'what are we going to do about the CMU thing' (the result was Athena).

By the time I got to Berkeley, most of the community was watching what was
going on.   The issue in the end with Accent was that they wrote it in
Pascal and it was not done with a UNIX API base (hence Mach would be in C
and built into BSD).   But Accent was the new cool system and UNIX was not
(yet) established as 'the' system in the DARPA community.

DARPA was still reeling from the move from the PDP-10, and a lot of people
were not 100% happy with that (we have been discussing rms hating Unix
and still pushing ITS as an example - he was hardly the only one).   While
the VAX was officially the DARPA hardware, Stanford was pushing
for VMS, if you remember.   Some folks wanted different HW from the VAX too.
CMU & MIT were pushing DARPA toward the idea of personal workstations
(Stanford would join that too, soon after a CMU EE - Andy B - did his grad
work at Stanford and built the "SUN" -- taking the ideas from the old CMU
distributed front-end project to a level no one ever imagined).

The point is that DARPA was kinda mixed up and the politics were not as
clear as they might seem 30 years later.

Rashid had presented the ports idea and the whole Accent network model at a
DARPA PI meeting at Berkeley, I want to say in the fall of 1981-ish (the
date may be off by a year or two).   IIRC he gave the Berkeley systems
seminar that week too.  All the UCB systems people, like me, had the paper
and we were abuzz about it.  Mike Powell, of Cray DEMOS fame, was teaching
the grad OS course and was mixed up in the argument, as was Bart Miller
(now at U Wisc), who was one of his students then.

Mike had us all writing different papers and proposing different ideas.
Who specifically contributed what is hard to say at this point, although
Joy was the master craftsman and final author/implementor of what went into
the CSRG system.  A lot of test stuff was done in private code or in other
UCB OS work.

But Joy was absolutely trying to come up with something for UNIX that would
give a lot of the functionality of Accent in a more UNIX-centric manner,
because he felt he needed to keep DARPA focused on UNIX as the system
of choice.   Hence overlaying the file descriptor in the socket call, while
Rashid, in Mach, would leave ports as a separate concept.




>
> I
> think this idea came from Sam Leffler, but perhaps he was inspired
> by something else (Accent?, Chaos?)
>
Sam is a good friend and I give him credit for many things, primarily for
making sure the BSD system worked.   As I have always said about Joy, "he
types open curly brace, close curly brace, and he patches faster than anyone
I have ever known."

So, besides mopping up after Bill, the three things I know you can credit
to Sam are routed, rcp/rsh, and trailers in the headers.   Besides that, he
was in on everything, but what was his, what was Bill's, and what was
someone else's is hard to say.

As one of my friends who did his undergrad @ MIT, and who was a UCB grad
student when I was there, put it (describing MIT's job control that Joy
swiped and added to BSD), "Bill recognized a lot of great ideas and then
peed on them to make them smell like UCB."




>
> ​...​
>
>
> Last and least, when feeling argumentative I would claim that connection
> strings like "/dev/tcp/host:port" are simply parameter data blocks encoded
> in a string :^)

Exactly... although the way that worked was actually a side effect/bug
in nami.




>
>
>
> ...
>  Back in 1980 there seems
> to have been a lot of confusion over message boundaries

It really was before 1980.   Those of us that lived networking in the 70s
were still trying to figure out the right way to do a lot of things.  By
the early 80s we agreed we had to have it, but the best known methods were
still unsettled.    We were still fighting the wars of SNA vs. DECnet vs
Wangnet vs MAP vs YourNet vs TelCo vs ....   People had not yet recognized
Metcalfe's Law.

Which really is an example of where engineers forget that technology is
actually driven by economics.    The 'winner' may not be the technology
that is the 'best' in 'theory.'

We forget that fact often on this list too, I fear.   I know it hurts; a
number of things I have been part of in my career I 'know' were 'better'
than what 'won', but it does not matter.  They were not economical.

Clem



* [TUHS] Sockets and the true UNIX
       [not found] <mailman.1105.1506026200.3779.tuhs@minnie.tuhs.org>
@ 2017-09-22 10:36 ` Paul Ruizendaal
  2017-09-22 14:32   ` Clem Cole
  2017-09-22 18:00   ` Chris Torek
  0 siblings, 2 replies; 31+ messages in thread
From: Paul Ruizendaal @ 2017-09-22 10:36 UTC (permalink / raw)


Gentlemen,

Below some additional thoughts on the various observations posted
about this. Note that I was not a contemporary of these developments,
and I may stand corrected on some views.

> I'm pretty sure the two main System V based TCP/IP stacks were STREAMS
> based: the Lachman one (which I ported to the ETA-10 and to SCO Unix)
> and the Mentat one that was done for Sun.  The socket API was sort of
> bolted on top of the STREAMS stuff, you could get to the STREAMS stuff
> directly (I think, it's been a long time).


Yes, that is my understanding too. I think it goes back to the two
roots of networking on Unix: the 1974 Spider network at Murray Hill and
the 1975 Arpanet implementation of the UoI.

It would seem that Spider chose to expose the network as a device, whereas
UoI chose to expose it as a kind of pipe. This seems to have continued in
derivative work (Datakit/streams/STREAMS and BBN/BSD sockets respectively).

When these systems were developed networking was mostly over serial lines,
and to use serial drivers was not illogical (i.e. streams->STREAMS). By 1980
fast local area networks were spreading, and the idea to see the network as
a serial device started to suck.

Much of the initial modification work that Joy did on the BBN code was to
make it perform on early ethernet -- it had been designed for 50 kbps arpanet
links. Some of his speed hacks (such as trailing headers) were later
discarded.

Interestingly, Spider was conceived as a fast network (1.5 Mbps); the local
network at Murray Hill operated at that speed, and things were designed to
work over long distance T1 connections as well. This integrated fast LAN/WAN
idea seems to have been abandoned in Datakit. I have a question out to Sandy
Fraser to ask about the origins of this, but have not yet received a reply.

> The sockets stuff was something Joy created to compete with the CMU Accent
> networking system.  [...]  CMU was developing Accent on the Triple Drip
> PascAlto (aka the Perq) and had a formal networking model that was very clean
> and sexy.  There were a lot of people interested in workstations, the Andrew
> project (MIT was about to start Athena, etc). So Bill creates the sockets
> interface to show that UNIX could be just as modern as Accent.

I've always thought that the Joy/Leffler API was a gradual development of
the UoI/BBN API. The main conceptual change seems to have been support for
multiple network systems (selectable network stack, expansion
of the address space to 14 bytes).

I don't quite see the link to Accent and Wikipedia offers little help here
https://en.wikipedia.org/wiki/Accent_kernel
Could you elaborate on how Accent networking influenced Joy's sockets?

> * There's no reason for
>   a separate listen() call (it takes a "backlog" argument but
>   in practice everyone defaults it and the kernel does strange
>   manipulations on it.)

Perhaps there is. The UoI/BBN API did not have a listen() call;
instead the open() call - if it was for a listening connection - blocked until
a connection occurred. The server process would then fork off a worker process
and re-issue the listening open() call for the next connection. This left a
time gap where the server would not be 'listening'.

The listen() call would create up to 'backlog' connection blocks in the
network code, so that this many clients could connect simultaneously
without user space intervention. Each accept() would hand over a (now
connected) connection block and add a fresh unconnected one to the backlog
list. I think this idea came from Sam Leffler, but perhaps he was inspired
by something else (Accent?, Chaos?)
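
(That mechanism is what survives in the familiar server loop; a minimal
sketch, with serve() a hypothetical per-connection worker:)

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>

    #define PORT 3490                       /* example service port */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(PORT);
    bind(s, (struct sockaddr *)&sin, sizeof sin);
    listen(s, 5);                           /* up to 5 pending connection blocks */
    for (;;) {
        int c = accept(s, NULL, NULL);      /* one connected block handed over */
        if (fork() == 0) {                  /* worker process, as in the UoI scheme */
            serve(c);
            _exit(0);
        }
        close(c);
    }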

Of course, this can be done with fewer system calls. The UoI/BBN system
used the open() call, with a pointer to a parameter data block as the 2nd
argument. Perhaps Joy/Leffler were of the opinion that parameter data
blocks were not very Unix-y, and hence spread it out over
socket()/connect()/bind()/listen() instead.

The UoI choice to overload the open() call and not create a new call
(analogous to the pipe() call) was entirely pragmatic: they felt this
was easier for keeping up with the updates coming out of Murray Hill
all the time.

> In particular, I have often thought that it would have been a better
> and more consistent with the philosophy to have it implemented as
> open("/dev/tcp") and so on.

I think this is perhaps an orthogonal topic: how does one map network names
to network addresses. The most ambitious was perhaps the "portal()" system
call contemplated by Joy, but soon abandoned. It may have been implemented
in the early 90's in BSD, but I'm not sure this was fully the same idea.

That said, making the name mapping a user concern rather than a kernel
concern is indeed a missed opportunity.

Last and least, when feeling argumentative I would claim that connection
strings like "/dev/tcp/host:port" are simply parameter data blocks encoded
in a string :^)

>   This also knocks out the need for
>   SO_REUSEADDR, because the kernel can tell at the time of
>   the call that you are asking to be a server.  Either someone
>   else already is (error) or you win (success).

Under TCP/IP I'm not sure you can. The protocol specifies that you must
wait for a certain period of time (120 sec, if memory serves me right)
before reusing an address/port combo, so that all in-flight packets have
disappeared from the network. Only if one is sure that this is not an
issue can one use SO_REUSEADDR.

> Also, the profusion of system calls (send, recv, sendmsg, recvmsg,
> recvfrom) is quite unnecessary: at most, one needs the equivalent
> of sendmsg/recvmsg.

Today that would indeed seem to make sense. Back in 1980 there seems
to have been a lot of confusion over message boundaries, even in
stream connections. My understanding is that originally send() and
recv() were intended to communicate a borderless stream, whereas
sendmsg() and recvmsg() were intended to communicate distinct
messages, even if transmitted over a stream protocol.

Paul




* [TUHS] Sockets and the true UNIX
  2017-09-21 16:13 ` Jon Steinhart
  2017-09-21 16:17   ` Larry McVoy
  2017-09-21 18:56   ` Bakul Shah
@ 2017-09-21 23:26   ` Dave Horsfall
  2 siblings, 0 replies; 31+ messages in thread
From: Dave Horsfall @ 2017-09-21 23:26 UTC (permalink / raw)


On Thu, 21 Sep 2017, Jon Steinhart wrote:

> In particular, I have often thought that it would have been a better and 
> more consistent with the philosophy to have it implemented as 
> open("/dev/tcp") and so on.  Granted that networking added some new 
> functionality that justified some of the system calls, just not 
> socket().

An old Unix manual referred to preferring "/dev/net" but that would've 
been a 9-track even parity tape drive under the nomenclature at the 
time...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."




* [TUHS] Sockets and the true UNIX
  2017-09-21 16:17   ` Larry McVoy
@ 2017-09-21 20:36     ` Chris Torek
  0 siblings, 0 replies; 31+ messages in thread
From: Chris Torek @ 2017-09-21 20:36 UTC (permalink / raw)


>... I've never been fond of the socket API though I
>am sympathetic, it's easy to do the easy parts as /dev/tcp but as I
>recall there are all sorts of weird cases that don't fit.

There's no reason it could not be done literally with entries
similar to /dev ones, since each socket() takes some arguments
(domain, type, protocol) which could be represented as directly
as:

    /socket/domain/type/protocol_or_default

for instance.  For the typical s = socket(PF_INET, SOCK_STREAM, 0)
you would open "/socket/inet/stream/tcp".  To distinguish
between UDP and RDP you'd open "/socket/inet/dgram/udp" vs
"/socket/inet/dgram/rdp".  Of course these days there's a
<type> constant for reliable datagram, and some of these are
just overcomplicating things: /socket/inet/tcp, /socket/inet/icmp,
etc., would actually suffice, but there's an obvious one
to one mapping from "socket" arguments to strings under
"/socket" or whatever top level name you want.

More fundamentally, a socket itself is only "half initialized".
It's neither bound to any local address, nor connected to any
remote address.  The BSD kernel interface introduced two
system calls, bind() and connect(), to let you specify these
two halves.  But these two calls are wrong!  It needs to be
*one* call, that does both operations simultaneously, so as to
allow you to specify that you want the system to provide the
local or remote address.  You supply one or both.  To supply
only one, pass in a NULL for the other one.

 * You supply only the remote address: "I want to connect to
   someone and I know their address, so just pick a local one
   that works."  This is currently written as connect().

 * You supply only the local address: "I want to be a server."
   This is currently written as bind() followed by listen()
   followed by a series of accept()s.  There's no reason for
   a separate listen() call (it takes a "backlog" argument but
   in practice everyone defaults it and the kernel does strange
   manipulations on it.)  This also knocks out the need for
   SO_REUSEADDR, because the kernel can tell at the time of
   the call that you are asking to be a server.  Either someone
   else already is (error) or you win (success).

 * You supply both halves at the same time.  (This is the
   special case FTP requires.)  "I want to connect to someone
   and I know he expects me to be address <L>."

   This is a rare special case.  We need a way to reserve an
   address.  But it's rare, let's make *it* the complicated one.

   If you supply both halves at the same time, you can pass
   pointers to two all-zero addresses, both local and remote.
   This is different from passing in a NULL ("I don't care"): it
   means "I do care, get me an address" (reserving it) along with
   "I can't actually connect yet".

   Now you have the "3/4ths done" socket.  You know what your
   local address will be.  You give this to the remote, and
   call the system call again with the other guy's IP address
   this time.

   That's not optimal (sometimes the local address you want to
   use depends on the remote address you will use), but it's what
   the existing system calls give you anyway, as you can't call
   connect() first, you must call bind() first.

Also, the profusion of system calls (send, recv, sendmsg, recvmsg,
recvfrom) is quite unnecessary: at most, one needs the equivalent
of sendmsg/recvmsg.

So we could right away do this:

   bind(), listen(), connect()     =>     setconn()
   send(), sendmsg()               =>     sendmsg()
   recv(), recvfrom(), recvmsg()   =>     recvmsg()

even without figuring out how to properly implement /dev/tcp
or whatever.

Of course, it's a bit late. :-)

Chris



* [TUHS] Sockets and the true UNIX
  2017-09-21 20:15         ` ron minnich
@ 2017-09-21 20:34           ` Clem Cole
  0 siblings, 0 replies; 31+ messages in thread
From: Clem Cole @ 2017-09-21 20:34 UTC (permalink / raw)



On Thu, Sep 21, 2017 at 4:15 PM, ron minnich <rminnich at gmail.com> wrote:

>
>
>
> In that rfc, the idea of /dev/tcp and so on was advanced. To connect to
> harv, one might create /dev/tcp/harv, for example. It was not quite right.
> How do you name a port?
>
The way we did it in an early version was a crude hack in nami:

    /dev/tcp/host/opt1,opt2,opt3...

It turns out nami left the unparsed chars alone and you could reach over
and grab them.  So after /dev/tcp was opened, the rest of the code did the
work.   IIRC, Apollo Domain did something similar, as did the ChaosNet code.

The sockets stuff was something Joy created to compete with the CMU Accent
networking system.  Remember CSRG did not have the responsibility to
support networking on UNIX, BBN did.  The IP/TCP stack had been developed
by Gurwitz et al at BBN and did use the open/close/ioctl interface.    CMU
was developing Accent on the Triple Drip PascAlto (aka the Perq) and had a
formal networking model that was very clean and sexy.  There were a lot of
people interested in workstations, the Andrew project (MIT was about to
start Athena, etc).   So Bill creates the sockets interface to show
that UNIX could be just as modern as Accent.

He puts the BBN stack behind the interface.   When 4.2 ships, no one
remembers the BBN API; it is the sockets API that lived.



* [TUHS] Sockets and the true UNIX
  2017-09-21 19:31       ` Bakul Shah
@ 2017-09-21 20:15         ` ron minnich
  2017-09-21 20:34           ` Clem Cole
  0 siblings, 1 reply; 31+ messages in thread
From: ron minnich @ 2017-09-21 20:15 UTC (permalink / raw)


you can go back and find an early RFC which discusses the notion of
/dev/tcp etc. I think I referenced it the last time this question came up.

In that rfc, the idea of /dev/tcp and so on was advanced. To connect to
harv, one might create /dev/tcp/harv, for example. It was not quite right.
How do you name a port? What file system operation in Unix corresponds to
what socket operation? There were lots of efforts from 77 or so on to get
sockets APIs done in a Unix style, i.e. not the style we ended up with, but
AFAIK nobody really worked it out until Plan 9 did it.

While it is true that socket()/bind/listen/accept/connect have stood the
test of time, I know when they were introduced many people thought of that
API as the first break in the Unix model, and a wrong turn, one that was
never fixed.

It's very easy to create a "file system" model for the network in a way
that doesn't make sense, e.g. if I have /dev/tcp/harv, and mv it to
/dev/tcp/prep, does that mean I close the connection to harv and open one
to prep? And so on. These issues were covered really nicely in a talk Rob
Pike gave in the 90s, but I can't find those slides any more and neither
can he. They're probably still on the venti server in the Unix room ... :-)

For an example of (IMHO) not getting it right, see:
http://www.cs.vu.nl/~herbertb/papers/osreview2008-2.pdf

I think anyone looking to put something like a network stack in the name
space should study on what Plan 9 did, not because it's the "right" or
"only' way to do it, but it's a way that worked.

But overall, in the 1977 timeframe, when (IIRC) there were no synthetic
file systems, I think we did not know how to think about "networks in the
file system" and we got what we got. By the time it was worked out, well,
it was too late to change.



* [TUHS] Sockets and the true UNIX
  2017-09-21 19:13     ` Steve Simon
@ 2017-09-21 19:31       ` Bakul Shah
  2017-09-21 20:15         ` ron minnich
  0 siblings, 1 reply; 31+ messages in thread
From: Bakul Shah @ 2017-09-21 19:31 UTC (permalink / raw)


On Sep 21, 2017, at 12:13 PM, Steve Simon <steve at quintile.net> wrote:
> 
> sockets and streams, but what about tli (i think) transport level interface - that exists in one of the sys-v variants.
> 
> maybe that was streams and i just didn't realise or maybe it was something else?

Never played with TLI. Not a sys-v fan.




* [TUHS] Sockets and the true UNIX
  2017-09-21 18:56   ` Bakul Shah
@ 2017-09-21 19:13     ` Steve Simon
  2017-09-21 19:31       ` Bakul Shah
  0 siblings, 1 reply; 31+ messages in thread
From: Steve Simon @ 2017-09-21 19:13 UTC (permalink / raw)


sockets and streams, but what about tli (i think) transport level interface - that exists in one of the sys-v variants.

maybe that was streams and i just didn't realise or maybe it was something else?

-Steve


> On 21 Sep 2017, at 19:56, Bakul Shah <bakul at bitblocks.com> wrote:
> 
>> On Thu, 21 Sep 2017 09:13:38 -0700 Jon Steinhart <jon at fourwinds.com> wrote:
>> Jon Steinhart writes:
>> 
>> Maybe this is naive of me, but I have never liked parts of the sockets
>> interface.  I understand that at some level it was a political/legal
>> thing, keeping the networking code independent of the rest of the kernel.
>> From a technical and historical standpoint, I view it as the tip of
>> the iceberg bloating the number of system calls.
> 
> In a sense the socket interface is a lower level interface
> compared to other syscalls. But complicated by the fact that
> it tries to be general.
> 
>> In particular, I have often thought that it would have been a better
>> and more consistent with the philosophy to have it implemented as
>> open("/dev/tcp") and so on.  Granted that networking added some new
>> functionality that justified some of the system calls, just not socket().
> 
> This is more or less how plan9 networking is done.  Among
> other things you can write scripts that can do networking even
> though the shell knows nothing about networking.  See
>    http://doc.cat-v.org/plan_9/4th_edition/papers/net/
> 
> The key is to realize that each protocol defines its own
> namespace so this fits naturally in plan9.  Allowing services
> (programs, kernel or drivers) to define their own namespaces
> and making them accessible via a tiny interface to any program
> is the main invention of plan9.  Similarly ctl files instead
> of ioctls.




* [TUHS] Sockets and the true UNIX
  2017-09-21 16:13 ` Jon Steinhart
  2017-09-21 16:17   ` Larry McVoy
@ 2017-09-21 18:56   ` Bakul Shah
  2017-09-21 19:13     ` Steve Simon
  2017-09-21 23:26   ` Dave Horsfall
  2 siblings, 1 reply; 31+ messages in thread
From: Bakul Shah @ 2017-09-21 18:56 UTC (permalink / raw)


On Thu, 21 Sep 2017 09:13:38 -0700 Jon Steinhart <jon at fourwinds.com> wrote:
Jon Steinhart writes:
> 
> Maybe this is naive of me, but I have never liked parts of the sockets
> interface.  I understand that at some level it was a political/legal
> thing, keeping the networking code independent of the rest of the kernel.
> From a technical and historical standpoint, I view it as the tip of
> the iceberg bloating the number of system calls.

In a sense the socket interface is a lower level interface
compared to other syscalls. But complicated by the fact that
it tries to be general.

> In particular, I have often thought that it would have been a better
> and more consistent with the philosophy to have it implemented as
> open("/dev/tcp") and so on.  Granted that networking added some new
> functionality that justified some of the system calls, just not socket().

This is more or less how plan9 networking is done.  Among
other things you can write scripts that can do networking even
though the shell knows nothing about networking.  See
    http://doc.cat-v.org/plan_9/4th_edition/papers/net/

The key is to realize that each protocol defines its own
namespace so this fits naturally in plan9.  Allowing services
(programs, kernel or drivers) to define their own namespaces
and making them accessible via a tiny interface to any program
is the main invention of plan9.  Similarly ctl files instead
of ioctls.
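
(The by-hand version of a Plan 9 dial, per that paper, is nothing but file
operations; a sketch, with POSIX O_RDWR standing in for Plan 9's ORDWR:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    char dir[32], data[64];
    char *addr = "connect 135.104.9.52!564";    /* example address!port */

    int cfd = open("/net/tcp/clone", O_RDWR);   /* allocates a fresh conversation */
    int n = read(cfd, dir, sizeof dir - 1);     /* reads its number, e.g. "4" */
    dir[n] = 0;
    write(cfd, addr, strlen(addr));             /* dial via the ctl file */
    snprintf(data, sizeof data, "/net/tcp/%s/data", dir);
    int dfd = open(data, O_RDWR);               /* the connection's byte stream */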



* [TUHS] Sockets and the true UNIX
  2017-09-21 16:10 ` Larry McVoy
  2017-09-21 16:20   ` David Edmondson
  2017-09-21 16:25   ` Clem Cole
@ 2017-09-21 18:26   ` Chet Ramey
  2 siblings, 0 replies; 31+ messages in thread
From: Chet Ramey @ 2017-09-21 18:26 UTC (permalink / raw)


On 9/21/17 12:10 PM, Larry McVoy wrote:

> I'm pretty sure the two main System V based TCP/IP stacks were STREAMS
> based: the Lachman one (which I ported to the ETA-10 and to SCO Unix)
> and the Mentat one that was done for Sun.  The socket API was sort of
> bolted on top of the STREAMS stuff, you could get to the STREAMS stuff
> directly (I think, it's been a long time).

Here's an article describing the work Sun did (whether it started at Mentat
or not) to initially provide the socket interface on SVR4:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29.7181&rep=rep1&type=pdf

It was my impression that they started with the BSD kernel implementation
and used it to create `socklib' and `sockmod'.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/



* [TUHS] Sockets and the true UNIX
  2017-09-21 16:25   ` Clem Cole
@ 2017-09-21 16:27     ` Larry McVoy
  0 siblings, 0 replies; 31+ messages in thread
From: Larry McVoy @ 2017-09-21 16:27 UTC (permalink / raw)


On Thu, Sep 21, 2017 at 12:25:13PM -0400, Clem Cole wrote:
> On Thu, Sep 21, 2017 at 12:10 PM, Larry McVoy <lm at mcvoy.com> wrote:
> 
> >
> > The STREAMS stuff never performed well
> 
> Larry is being polite.   It sucked.
> 
> It was never designed for it.  dmr designed Streams to replace the tty
> handler.   I never understood why the Summit guys insisted on forcing
> networking into it.

+1 

I've always tried to make it clear that dmr's streams != Sys V STREAMS.
I thought dmr streams were cool, I detest the Sys V STREAMS.



* [TUHS] Sockets and the true UNIX
  2017-09-21 16:10 ` Larry McVoy
  2017-09-21 16:20   ` David Edmondson
@ 2017-09-21 16:25   ` Clem Cole
  2017-09-21 16:27     ` Larry McVoy
  2017-09-21 18:26   ` Chet Ramey
  2 siblings, 1 reply; 31+ messages in thread
From: Clem Cole @ 2017-09-21 16:25 UTC (permalink / raw)



On Thu, Sep 21, 2017 at 12:10 PM, Larry McVoy <lm at mcvoy.com> wrote:

>
> The STREAMS stuff never performed well

Larry is being polite.   It sucked.

It was never designed for it.  dmr designed Streams to replace the tty
handler.   I never understood why the Summit guys insisted on forcing
networking into it.



> and the BSD TCP stack or something
> ​ ​
> like it went into Solaris at some point.

Right.   This is one of the places where SVR4 != Solaris

I was researching a book a long time ago and looked at this.  I had
Solaris, SVR4 and some other stuff at the time.  Like you, bit rot has long
set in on the details, but I do remember that the primary thing that
Solaris had was support for Sun's threading model, and the networking code
had to be a first-class citizen or it was not going to scale.



* [TUHS] Sockets and the true UNIX
  2017-09-21 16:10 ` Larry McVoy
@ 2017-09-21 16:20   ` David Edmondson
  2017-09-21 16:25   ` Clem Cole
  2017-09-21 18:26   ` Chet Ramey
  2 siblings, 0 replies; 31+ messages in thread
From: David Edmondson @ 2017-09-21 16:20 UTC (permalink / raw)


On Thursday, 2017-09-21 at 09:10:40 -0700, Larry McVoy wrote:

> I'm pretty sure the two main System V based TCP/IP stacks were STREAMS
> based: the Lachman one (which I ported to the ETA-10 and to SCO Unix)
> and the Mentat one that was done for Sun.  The socket API was sort of
> bolted on top of the STREAMS stuff, you could get to the STREAMS stuff
> directly (I think, it's been a long time).

You could pop the socket module to get to the stream underneath, but
lots of streams functionality still works with it present.

dme.
-- 
Modern people tend to dance.



* [TUHS] Sockets and the true UNIX
  2017-09-21 16:13 ` Jon Steinhart
@ 2017-09-21 16:17   ` Larry McVoy
  2017-09-21 20:36     ` Chris Torek
  2017-09-21 18:56   ` Bakul Shah
  2017-09-21 23:26   ` Dave Horsfall
  2 siblings, 1 reply; 31+ messages in thread
From: Larry McVoy @ 2017-09-21 16:17 UTC (permalink / raw)


On Thu, Sep 21, 2017 at 09:13:38AM -0700, Jon Steinhart wrote:
> Maybe this is naive of me, but I have never liked parts of the sockets
> interface.  I understand that at some level it was a political/legal
> thing, keeping the networking code independent of the rest of the kernel.
> From a technical and historical standpoint, I view it as the tip of
> the iceberg bloating the number of system calls.
> 
> In particular, I have often thought that it would have been better
> and more consistent with the philosophy to have implemented it as
> open("/dev/tcp") and so on.  Granted, networking added some new
> functionality that justified some of the system calls, just not socket().

If you look in the lmbench code I sort of had similar thoughts, but did
them as functions.  I've never been fond of the socket API, though I am
sympathetic: it's easy to do the easy parts as /dev/tcp, but as I recall
there are all sorts of weird cases that don't fit.  I've tried to come
up with a /dev/tcp style that covers all the cases, and I failed.

lib_tcp.h
#include        <sys/types.h>
#include        <sys/socket.h>
#include        <netinet/in.h>
#include        <netdb.h>
#include        <arpa/inet.h>
#include        <rpc/types.h>   /* bool_t for the pmap_* declarations */

int     tcp_server(int prog, int rdwr);         /* make a listening socket */
int     tcp_done(int prog);                     /* tear a server down */
int     tcp_accept(int sock, int rdwr);         /* accept one connection */
int     tcp_connect(char *host, int prog, int rdwr);   /* client side */
void    sock_optimize(int sock, int rdwr);      /* set socket options */
int     sockport(int s);                        /* port a socket is bound to */
#ifndef NO_PORTMAPPER
u_short pmap_getport(struct sockaddr_in *addr, u_long prognum,
                u_long versnum, u_int protocol);
bool_t  pmap_set(u_long prognum, u_long versnum, u_long protocol,
                u_short port);
bool_t  pmap_unset(u_long prognum, u_long versnum);
#endif
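
For what it's worth, client usage comes out about like this.  A sketch
against the prototypes above: the lib_tcp.c behind them isn't shown, and
treating prog as a literal TCP port and 0 as "no special flags" for rdwr
are my assumptions:

#include	"lib_tcp.h"
#include	<unistd.h>

int
main(void)
{
	char	buf[64];
	int	n;
	/* assumed: prog == 9999 is taken as a literal port, and
	 * rdwr == 0 asks sock_optimize() for no special options */
	int	sock = tcp_connect("localhost", 9999, 0);

	if (sock < 0)
		return 1;
	write(sock, "ping\n", 5);
	n = read(sock, buf, sizeof(buf));
	close(sock);
	return n > 0 ? 0 : 1;
}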


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [TUHS] Sockets and the true UNIX
  2017-09-21 16:01 Ian Zimmerman
  2017-09-21 16:07 ` Chet Ramey
  2017-09-21 16:10 ` Larry McVoy
@ 2017-09-21 16:13 ` Jon Steinhart
  2017-09-21 16:17   ` Larry McVoy
                     ` (2 more replies)
  2 siblings, 3 replies; 31+ messages in thread
From: Jon Steinhart @ 2017-09-21 16:13 UTC (permalink / raw)


Ian Zimmerman writes:
> This question is motivated by the posters for whom FreeBSD is not Unix
> enough :-)
> 
> Probably the best known contribution of the Berkeley branch of Unix is
> the sockets API for IP networking.  But today, if for no other reason
> than the X/Open group of standards, sockets are the preferred networking
> API everywhere, even on true AT&T derived UNIX variants.  So they must
> have been merged back at some point, or reimplemented.  My question is,
> when and how did that happen?
> 
> And if there isn't a simple answer because it happened at different
> times and in different ways for each variant, all the better :-)

Maybe this is naive of me, but I have never liked parts of the sockets
interface.  I understand that at some level it was a political/legal
thing, keeping the networking code independent of the rest of the kernel.
From a technical and historical standpoint, I view it as the tip of
the iceberg bloating the number of system calls.

In particular, I have often thought that it would have been better
and more consistent with the philosophy to have implemented it as
open("/dev/tcp") and so on.  Granted, networking added some new
functionality that justified some of the system calls, just not socket().
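
Something like this is what I have in mind.  Hypothetical, of course,
since no stock kernel has such a driver; the path layout and semantics
are invented for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char	buf[512];
	int	n;
	/* hypothetical driver: parses host and port out of the path
	 * and hands back an already-connected byte stream */
	int	fd = open("/dev/tcp/localhost/25", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	n = read(fd, buf, sizeof(buf));	/* e.g. the SMTP greeting */
	if (n > 0)
		write(1, buf, n);
	close(fd);
	return 0;
}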

Comments?

Jon


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [TUHS] Sockets and the true UNIX
  2017-09-21 16:01 Ian Zimmerman
  2017-09-21 16:07 ` Chet Ramey
@ 2017-09-21 16:10 ` Larry McVoy
  2017-09-21 16:20   ` David Edmondson
                     ` (2 more replies)
  2017-09-21 16:13 ` Jon Steinhart
  2 siblings, 3 replies; 31+ messages in thread
From: Larry McVoy @ 2017-09-21 16:10 UTC (permalink / raw)


On Thu, Sep 21, 2017 at 09:01:12AM -0700, Ian Zimmerman wrote:
> This question is motivated by the posters for whom FreeBSD is not Unix
> enough :-)
> 
> Probably the best known contribution of the Berkeley branch of Unix is
> the sockets API for IP networking.  But today, if for no other reason
> than the X/Open group of standards, sockets are the preferred networking
> API everywhere, even on true AT&T derived UNIX variants.  So they must
> have been merged back at some point, or reimplemented.  My question is,
> when and how did that happen?
> 
> And if there isn't a simple answer because it happened at different
> times and in different ways for each variant, all the better :-)

I'm pretty sure the two main System V based TCP/IP stacks were STREAMS
based: the Lachman one (which I ported to the ETA-10 and to SCO Unix)
and the Mentat one that was done for Sun.  The socket API was sort of
bolted on top of the STREAMS stuff, you could get to the STREAMS stuff
directly (I think, it's been a long time).
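
Getting at it directly meant opening the transport provider and speaking
TLI to it, roughly like this.  From memory, so treat it as a sketch; the
port is made up and error handling is minimal:

#include <tiuser.h>
#include <fcntl.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(void)
{
	struct sockaddr_in	sin;
	struct t_call		*call;
	/* open the TCP transport provider directly, no socket library */
	int	fd = t_open("/dev/tcp", O_RDWR, NULL);

	if (fd < 0 || t_bind(fd, NULL, NULL) < 0)
		return 1;
	call = (struct t_call *)t_alloc(fd, T_CALL, T_ADDR);
	if (call == NULL)
		return 1;
	/* on SVR4 the TCP provider takes sockaddr_in-shaped addresses */
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(9999);		/* made-up port */
	sin.sin_addr.s_addr = inet_addr("127.0.0.1");
	memcpy(call->addr.buf, &sin, sizeof(sin));
	call->addr.len = sizeof(sin);
	if (t_connect(fd, call, NULL) < 0)
		return 1;
	t_snd(fd, "hello\n", 6, 0);
	t_close(fd);
	return 0;
}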

The STREAMS stuff never performed well and the BSD TCP stack or something
like it went into Solaris at some point.  That's another "I think".  If
you want the gory details I'll ask Neal Nuckolls; he would know, since he
was in the networking group and worked closely with Mentat.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [TUHS] Sockets and the true UNIX
  2017-09-21 16:01 Ian Zimmerman
@ 2017-09-21 16:07 ` Chet Ramey
  2017-09-21 16:10 ` Larry McVoy
  2017-09-21 16:13 ` Jon Steinhart
  2 siblings, 0 replies; 31+ messages in thread
From: Chet Ramey @ 2017-09-21 16:07 UTC (permalink / raw)


On 9/21/17 12:01 PM, Ian Zimmerman wrote:
> This question is motivated by the posters for whom FreeBSD is not Unix
> enough :-)
> 
> Probably the best known contribution of the Berkeley branch of Unix is
> the sockets API for IP networking.  But today, if for no other reason
> than the X/Open group of standards, sockets are the preferred networking
> API everywhere, even on true AT&T derived UNIX variants.  So they must
> have been merged back at some point, or reimplemented.  My question is,
> when and how did that happen?

AT&T merged the sockets interface into SVR4, starting with the BSD code.
Sun did the work.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
		 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet at case.edu    http://cnswww.cns.cwru.edu/~chet/


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [TUHS] Sockets and the true UNIX
@ 2017-09-21 16:01 Ian Zimmerman
  2017-09-21 16:07 ` Chet Ramey
                   ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Ian Zimmerman @ 2017-09-21 16:01 UTC (permalink / raw)


This question is motivated by the posters for whom FreeBSD is not Unix
enough :-)

Probably the best known contribution of the Berkeley branch of Unix is
the sockets API for IP networking.  But today, if for no other reason
than the X/Open group of standards, sockets are the preferred networking
API everywhere, even on true AT&T derived UNIX variants.  So they must
have been merged back at some point, or reimplemented.  My question is,
when and how did that happen?

And if there isn't a simple answer because it happened at different
times and in different ways for each variant, all the better :-)

-- 
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
Do obvious transformation on domain to reply privately _only_ on Usenet.


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2017-09-25 18:55 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-25 17:57 [TUHS] Sockets and the true UNIX Norman Wilson
2017-09-25 18:55 ` Clem Cole
     [not found] <mailman.1105.1506026200.3779.tuhs@minnie.tuhs.org>
2017-09-22 10:36 ` Paul Ruizendaal
2017-09-22 14:32   ` Clem Cole
2017-09-22 14:42     ` Chet Ramey
2017-09-22 14:47       ` Chet Ramey
2017-09-22 14:52         ` Lars Brinkhoff
2017-09-22 14:49       ` Larry McVoy
2017-09-22 14:57         ` Chet Ramey
2017-09-22 18:14           ` Chris Torek
2017-09-22 18:43             ` Clem Cole
2017-09-22 19:08             ` Chet Ramey
2017-09-22 20:57               ` Chris Torek
2017-09-24 18:04                 ` Chet Ramey
2017-09-22 18:00   ` Chris Torek
  -- strict thread matches above, loose matches on Subject: below --
2017-09-21 16:01 Ian Zimmerman
2017-09-21 16:07 ` Chet Ramey
2017-09-21 16:10 ` Larry McVoy
2017-09-21 16:20   ` David Edmondson
2017-09-21 16:25   ` Clem Cole
2017-09-21 16:27     ` Larry McVoy
2017-09-21 18:26   ` Chet Ramey
2017-09-21 16:13 ` Jon Steinhart
2017-09-21 16:17   ` Larry McVoy
2017-09-21 20:36     ` Chris Torek
2017-09-21 18:56   ` Bakul Shah
2017-09-21 19:13     ` Steve Simon
2017-09-21 19:31       ` Bakul Shah
2017-09-21 20:15         ` ron minnich
2017-09-21 20:34           ` Clem Cole
2017-09-21 23:26   ` Dave Horsfall

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).