I am trying to remember when fd 2 (aka stderr) became a thing. I have a
vague memory that it was post-v6 but that may be way off.
At 2023-01-20T14:44:51-0800, ron minnich wrote:
> I am trying to remember when fd 2 (aka stderr) became a thing. I have
> a vague memory that it was post-v6 but that may be way off.

The story goes that it swiftly followed the creation of troff.

https://minnie.tuhs.org/pipermail/tuhs/2013-December/006113.html

Regards,
Branden
I will let others respond with specifics, but that was there in v5 at
least.

One of the many shocks I had on arrival at Google was that in
production, fd2 was pretty much standard output, as all the logging
went there. And since our machines were still 32-bit CPUs then, it was
common to overflow the offset because the logging output was, well,
not parsimonious. It felt like I'd entered bizarro world, until I
realized that few anywhere really knew how Unix was supposed to be
used. And they still don't, and never will.

-rob

On Sat, Jan 21, 2023 at 9:46 AM ron minnich <rminnich@gmail.com> wrote:
> I am trying to remember when fd 2 (aka stderr) became a thing. I have
> a vague memory that it was post-v6 but that may be way off.
On Sat, Jan 21, 2023 at 09:56:16AM +1100, Rob Pike wrote:
> I will let others respond with specifics, but that was there in v5 at least.
>
> One of the many shocks I had on arrival at Google was that in production,
> fd2 was pretty much standard output, as all the logging went there. And
> since our machines were still 32-bit CPUs then, it was common to overflow
> the offset because the logging output was, well, not parsimonious. It felt
> like I'd entered bizarro world, until I realized that few anywhere really
> knew how Unix was supposed to be used. And they still don't, and never will.
Isn't there a logging API complete with the ability to centralize the logs?
Yes. It is confirmed by a change in the init man page between v6 and
v7. This can be seen in the TUHS archive.
On Fri, Jan 20, 2023 at 5:46 PM ron minnich <rminnich@gmail.com> wrote:
>
> I am trying to remember when fd 2 (aka stderr) became a thing. I have a vague memory that it was post-v6 but that may be way off.
>
>
Yes, but everyone adds the flag --alsologtostderr

-rob

On Sat, Jan 21, 2023 at 10:11 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Sat, Jan 21, 2023 at 09:56:16AM +1100, Rob Pike wrote:
> > I will let others respond with specifics, but that was there in v5
> > at least.
> >
> > One of the many shocks I had on arrival at Google was that in
> > production, fd2 was pretty much standard output, as all the logging
> > went there. And since our machines were still 32-bit CPUs then, it
> > was common to overflow the offset because the logging output was,
> > well, not parsimonious. It felt like I'd entered bizarro world,
> > until I realized that few anywhere really knew how Unix was
> > supposed to be used. And they still don't, and never will.
>
> Isn't there a logging API complete with the ability to centralize the logs?
That's dumb. Though I run slack like so

    slack > /dev/null 2>&1 &

because whatever version I have spews garbage to stderr. Sigh.

On Sat, Jan 21, 2023 at 10:14:25AM +1100, Rob Pike wrote:
> Yes, but everyone adds the flag --alsologtostderr
>
> -rob
>
> On Sat, Jan 21, 2023 at 10:11 AM Larry McVoy <lm@mcvoy.com> wrote:
> > On Sat, Jan 21, 2023 at 09:56:16AM +1100, Rob Pike wrote:
> > > I will let others respond with specifics, but that was there in
> > > v5 at least.
> > >
> > > One of the many shocks I had on arrival at Google was that in
> > > production, fd2 was pretty much standard output, as all the
> > > logging went there. And since our machines were still 32-bit CPUs
> > > then, it was common to overflow the offset because the logging
> > > output was, well, not parsimonious. It felt like I'd entered
> > > bizarro world, until I realized that few anywhere really knew how
> > > Unix was supposed to be used. And they still don't, and never
> > > will.
> >
> > Isn't there a logging API complete with the ability to centralize the logs?

--
Larry McVoy        Retired to fishing        http://www.mcvoy.com/lm/boat
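[For anyone puzzling over that incantation: `2>&1` makes fd 2 a copy of whatever fd 1 points at *when the redirection is processed*, so the order of the two redirections matters. A small illustration, with `sh -c` standing in for any noisy program and the /tmp paths chosen arbitrarily:]

```shell
# fd 1 goes to the file first, then fd 2 is made a copy of it,
# so both "out" and "err" end up in the file:
sh -c 'echo out; echo err >&2' > /tmp/both.log 2>&1

# reversed: fd 2 is duplicated from fd 1's *current* destination
# (the terminal) before fd 1 is moved to the file, so only "out"
# lands in the file and "err" stays on the terminal:
sh -c 'echo out; echo err >&2' 2>&1 > /tmp/out-only.log
```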
ron minnich writes:
> I am trying to remember when fd 2 (aka stderr) became a thing. I have a
> vague memory that it was post-v6 but that may be way off.
Related to this, according to Steve Johnson, stderr was not part of the
original stdio library; it was added as a side effect of the development
of troff for the C/A/T phototypesetter, which projected images onto
silver photographic paper. That became very expensive when the Hunt
brothers cornered the silver market, and folks were asked to cut down on
phototypesetter use. It was not uncommon to send a job to the typesetter
only to get back a beautifully formatted page containing a "cannot open
file" error message. The stderr file pointer was born so that error
messages could go to the terminal instead of to the typesetter, in order
to save money.
Certainly fd 2 as an error output appears with Lesk's portable C
library, which was included in V6 - see the last para on page 1 of his
document:

"Initially you are given three file descriptors by the system: 0, 1,
and 2. File 0 is the standard input; it is normally the teletype in
time-sharing or input data cards in batch. File 1 is the standard
output; it is normally the teletype in time-sharing or the line printer
in batch. File 2 is the error file; it is an output file, normally the
same as file 1, except that when file 1 is diverted via a command line
'>' operator, file 2 remains attached to the original destination,
usually the terminal. It is used for error message output. These
popular UNIX conventions are considered part of the C library
specification. By closing 0 or 1, the default input or output may be
re-directed; this can also be done on the command line by >file for
output or <file for input."

This document pre-dates ditroff. The ideas/names of stdin/out/err (as
opposed to fd 0, 1, 2) do not come about until Dennis does "typesetter
C", which is described in K&R.

That said, my memory is that fd 2 as an error path was there in Fourth
or Fifth Edition, before Lesk did this library -- i.e. it was a
convention that Dennis/Ken et al were all using -- once it was realized
that printing errors in the same output flow as the standard (normal)
output was problematic in a pipe stream.

On Fri, Jan 20, 2023 at 5:46 PM ron minnich <rminnich@gmail.com> wrote:
> I am trying to remember when fd 2 (aka stderr) became a thing. I have
> a vague memory that it was post-v6 but that may be way off.

[Attachment: iolib.pdf]
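[Lesk's paragraph is still an accurate description of how the three descriptors behave. A minimal modern-POSIX sketch (illustrative only, not V6 code; the two helper names are invented here): results go to fd 1, the descriptor the shell's '>' diverts, while diagnostics go to fd 2, which stays attached to the original destination.]

```c
#include <string.h>
#include <unistd.h>

/* Ordinary output: fd 1, the descriptor diverted by a command-line '>'. */
static ssize_t emit_result(const char *line)
{
    return write(1, line, strlen(line));
}

/* Diagnostics: fd 2, which remains attached to the original
 * destination (usually the terminal) when fd 1 is redirected. */
static ssize_t emit_error(const char *msg)
{
    return write(2, msg, strlen(msg));
}
```

[Compiled into a program and run as `prog > out.txt`, the emit_error() text still appears on the terminal while the emit_result() text lands in out.txt.]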
On Sat, Jan 21, 2023, 8:45 AM Clem Cole <clemc@ccc.com> wrote:
> Certainly fd 2 as an error output appears with Lesk's portable C
> library, which was included in V6 - see the last para on page 1 of
> his document:
>
> "Initially you are given three file descriptors by the system: 0, 1,
> and 2. File 0 is the standard input; it is normally the teletype in
> time-sharing or input data cards in batch. File 1 is the standard
> output; it is normally the teletype in time-sharing or the line
> printer in batch. File 2 is the error file; it is an output file,
> normally the same as file 1, except that when file 1 is diverted via
> a command line '>' operator, file 2 remains attached to the original
> destination, usually the terminal. It is used for error message
> output. These popular UNIX conventions are considered part of the C
> library specification. By closing 0 or 1, the default input or output
> may be re-directed; this can also be done on the command line by
> >file for output or <file for input."
>
> This document pre-dates ditroff. The ideas/names of stdin/out/err (as
> opposed to fd 0, 1, 2) do not come about until Dennis does
> "typesetter C", which is described in K&R.
>
> That said, my memory is that fd 2 as an error path was there in
> Fourth or Fifth Edition, before Lesk did this library -- i.e. it was
> a convention that Dennis/Ken et al were all using -- once it was
> realized that printing errors in the same output flow as the standard
> (normal) output was problematic in a pipe stream.

It looks to be a mixed bag in 5th edition. Cc, ac, comm and find all
printf the errors. Diff uses writes to fd 2. I didn't catalog further,
but this kinda looks like a developing convention, not yet fully
realized.

Warner

On Fri, Jan 20, 2023 at 5:46 PM ron minnich <rminnich@gmail.com> wrote:
> > I am trying to remember when fd 2 (aka stderr) became a thing. I
> > have a vague memory that it was post-v6 but that may be way off.
On Sat, Jan 21, 2023, 10:34 AM Warner Losh <imp@bsdimp.com> wrote:
> On Sat, Jan 21, 2023, 8:45 AM Clem Cole <clemc@ccc.com> wrote:
> > Certainly fd 2 as an error output appears with Lesk's portable C
> > library, which was included in V6 - see the last para on page 1 of
> > his document:
> >
> > "Initially you are given three file descriptors by the system: 0,
> > 1, and 2. File 0 is the standard input; it is normally the teletype
> > in time-sharing or input data cards in batch. File 1 is the
> > standard output; it is normally the teletype in time-sharing or the
> > line printer in batch. File 2 is the error file; it is an output
> > file, normally the same as file 1, except that when file 1 is
> > diverted via a command line '>' operator, file 2 remains attached
> > to the original destination, usually the terminal. It is used for
> > error message output. These popular UNIX conventions are considered
> > part of the C library specification. By closing 0 or 1, the default
> > input or output may be re-directed; this can also be done on the
> > command line by >file for output or <file for input."
> >
> > This document pre-dates ditroff. The ideas/names of stdin/out/err
> > (as opposed to fd 0, 1, 2) do not come about until Dennis does
> > "typesetter C", which is described in K&R.
> >
> > That said, my memory is that fd 2 as an error path was there in
> > Fourth or Fifth Edition, before Lesk did this library -- i.e. it
> > was a convention that Dennis/Ken et al were all using -- once it
> > was realized that printing errors in the same output flow as the
> > standard (normal) output was problematic in a pipe stream.
>
> It looks to be a mixed bag in 5th edition. Cc, ac, comm and find all
> printf the errors. Diff uses writes to fd 2. I didn't catalog
> further, but this kinda looks like a developing convention, not yet
> fully realized.

Further digging shows v6 is quite similar. V7 revamps everything, with
printf changed to a routine that writes to stderr. Each program did
this differently.

Granted, this is a small sample size. There wasn't a huge uptake of
stderr/fd2 in v6 it seems, but v7 looks like it had a pass over the
code to at least try for all errors going to fd2.

Warner

> Warner
>
> On Fri, Jan 20, 2023 at 5:46 PM ron minnich <rminnich@gmail.com> wrote:
> > > I am trying to remember when fd 2 (aka stderr) became a thing. I
> > > have a vague memory that it was post-v6 but that may be way off.
On Sat, Jan 21, 2023 at 12:50 PM Warner Losh <imp@bsdimp.com> wrote:
> Further digging shows v6 is quite similar.

As I said, by then they knew it was an issue and fd 2 had been
introduced. The place to look is not the utilities, but the shell
itself and what the kernel is doing in newproc et al for the tty
handler.

> V7 revamps everything, with printf changed to a routine that writes
> to stderr. Each program did this differently.

IMO: Sort of two different things >>I think<<. They are related - the
idea (what the OS supplied to each process), as opposed to what/how
each utility handled it.

With typesetter C -- we get dmr's new stdio library (libS.a) replacing
Lesk's portable C library. This was released independently of V6 as
what DEC would have called a 'layered product' that ran on top of V6.
As I said, this is the language that Dennis and Brian describe in K&R
(version 1).

With what was becoming UNIX/TS -- which doesn't seem to have had a
formal release -- we get what we all think of as the V7 kernel,
Bourne's new shell and the new and updated tools suite as part of the
system.

So with V7, as you point out, most, but not all, of the utilities have
been updated to start using the new compiler (since by then Lesk's
library is not included). If the code were recompiled, that code would
have had to use the new FILE * structure over the small integer fd,
for the printf family. With V6 there were still a handful of utilities
in assembler (like snobol IIRC), but by V7 I think most of them had
been culled.

And while your point is that you need to look at what each utility
implemented, >>I believe<< that the key to Ron's question is what the
shell and the kernel supplied [dates the idea], and >>then<< whether
the utility obeyed the new fd 2 as the error file is when it starts to
be more formally enforced. So any further hunting should start there I
would think.

Clem

> Granted, this is a small sample size. There wasn't a huge uptake of
> stderr/fd2 in v6 it seems, but v7 looks like it had a pass over the
> code to at least try for all errors going to fd2.
>
> Warner
>
> On Fri, Jan 20, 2023 at 5:46 PM ron minnich <rminnich@gmail.com> wrote:
> > > I am trying to remember when fd 2 (aka stderr) became a thing. I
> > > have a vague memory that it was post-v6 but that may be way off.
On Sat, Jan 21, 2023 at 12:34 PM Warner Losh <imp@bsdimp.com> wrote:
> It looks to be a mixed bag in 5th edition. Cc, ac, comm and find all
> printf the errors. Diff uses writes to fd 2. I didn't catalog
> further, but this kinda looks like a developing convention, not yet
> fully realized.

Fair enough and that makes sense...
On Sat, Jan 21, 2023, 11:26 AM Clem Cole <clemc@ccc.com> wrote:
> On Sat, Jan 21, 2023 at 12:50 PM Warner Losh <imp@bsdimp.com> wrote:
> > Further digging shows v6 is quite similar.
>
> As I said, by then they knew it was an issue and fd 2 had been
> introduced. The place to look is not the utilities, but the shell
> itself and what the kernel is doing in newproc et al for the tty
> handler.

The utilities were all using printf so even if the concept was there,
it wasn't widely used. I'll take a longer look, including these
places... though I'm not sure I see the connection to the tty driver...

> > V7 revamps everything, with printf changed to a routine that writes
> > to stderr. Each program did this differently.
>
> IMO: Sort of two different things >>I think<<. They are related - the
> idea (what the OS supplied to each process), as opposed to what/how
> each utility handled it.

Yea. Mostly I think each program had a different wrapper around stderr
to minimize changes to the program...

> With typesetter C -- we get dmr's new stdio library (libS.a)
> replacing Lesk's portable C library. This was released independently
> of V6 as what DEC would have called a 'layered product' that ran on
> top of V6. As I said, this is the language that Dennis and Brian
> describe in K&R (version 1).

Do we have extant copies of that? So far I have seen the kernel
patches that were "leaked" but not the rest of it.

> With what was becoming UNIX/TS -- which doesn't seem to have had a
> formal release -- we get what we all think of as the V7 kernel,
> Bourne's new shell and the new and updated tools suite as part of the
> system.

Yea.

> So with V7, as you point out, most, but not all, of the utilities
> have been updated to start using the new compiler (since by then
> Lesk's library is not included). If the code were recompiled, that
> code would have had to use the new FILE * structure over the small
> integer fd, for the printf family. With V6 there were still a handful
> of utilities in assembler (like snobol IIRC), but by V7 I think most
> of them had been culled.
>
> And while your point is that you need to look at what each utility
> implemented, >>I believe<< that the key to Ron's question is what the
> shell and the kernel supplied [dates the idea], and >>then<< whether
> the utility obeyed the new fd 2 as the error file is when it starts
> to be more formally enforced. So any further hunting should start
> there I would think.

Yea. Like many things, there was a transition... the most important
bit is the shell. And that was more tricky to read through with the
phone at breakfast...

Warner

> Clem
>
> > Granted, this is a small sample size. There wasn't a huge uptake of
> > stderr/fd2 in v6 it seems, but v7 looks like it had a pass over the
> > code to at least try for all errors going to fd2.
> On 21 Jan 2023, at 19:27, Clem Cole <clemc@ccc.com> wrote:
>
> On Sat, Jan 21, 2023 at 12:34 PM Warner Losh <imp@bsdimp.com> wrote:
> > It looks to be a mixed bag in 5th edition. Cc, ac, comm and find
> > all printf the errors. Diff uses writes to fd 2. I didn't catalog
> > further, but this kinda looks like a developing convention, not yet
> > fully realized.
>
> Fair enough and that makes sense...

I do remember that pr -p actually read from FD 2. It probably still
does. Very useful to align paper in the printer.

jaap
On Sat, Jan 21, 2023 at 11:37:42AM -0700, Warner Losh wrote:
> On Sat, Jan 21, 2023, 11:26 AM Clem Cole <clemc@ccc.com> wrote:
> > With typesetter C -- we get dmr's new stdio library (libS.a)
> > replacing Lesk's portable C library. This was released
> > independently of V6 as what DEC would have called a 'layered
> > product' that ran on top of V6. As I said, this is the language
> > that Dennis and Brian describe in K&R (version 1).
>
> Do we have extant copies of that? So far I have seen the kernel
> patches that were "leaked" but not the rest of it.

Of the "Phototypesetter Version 7" tape? The compiler and stdio show
up in other places before v7.

libp/iolib is described in M. E. Lesk - The Portable C Library
https://archive.org/details/ThePortableCLibrary_May75
(abbreviated in
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/doc/iolib/iolib )

libS/stdio appears in a modified form in PWB/UNIX 1.0, as does "A New
Input-Output Package"
https://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/source/s4/stdio/ios.r
included in "Documents for the PWB/UNIX Time-Sharing System Edition 1.0"
https://bitsavers.org/pdf/att/unix/PWB_UNIX/Documents_for_the_PWB_UNIX_Time-Sharing_System_Edition_1.0_197710.pdf

An older? version appears in
tuhs/Applications/Spencer_Tapes/unsw3.tar.gz usr/source/stdio

It is also part of
https://minnie.tuhs.org/cgi-bin/utree.pl?file=Interdata732/usr/source/stdio
https://minnie.tuhs.org/cgi-bin/utree.pl?file=Interdata732/usr/doc/cdoc/ios.nr
tuhs/Distributions/Research/Tim_Shoppa_v6 (libS, stdio.h, no libS source)

and MERT Release 0, NEWIO(III): newio - a new IO subroutine package
https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Unix%20Programmer%20Manual%20-%20UPM%20-%20White%20Tabs/Subroutines%20-%20man3/newio.3.pdf

"part of the effort in moving programs to the Interdata machine was
devoted to making programs use the new standard I/O library"
S. C. Johnson, D. M. Ritchie - Portability of C Programs and the UNIX System
https://www.bell-labs.com/usr/dmr/www/portpap.html

"In the first sense, the portability of C programs have been recently
enhanced by new tools developed by center 127: ... * The Standard I/O
package implements efficient portable I/O. This package should be used
for nearly all new C development. The bulk of existing programs should
be converted, as well."
B. A. Tague - C Language Portability
https://archive.org/details/CLanguagePortability_Sept77

"2.2 The Standard I/O Library
In the past, there were two I/O libraries available. One was
documented by A New Input-Output Package (Ritchie), and was made
available through the -lS loader option. The other, older one was made
available whenever a C program was being compiled; it was
characterized, among other things, by use of the names fin and fout to
control disposition of standard input and output files. The older
library has now vanished, along with the -lS option. All programs will
receive the new I/O library without any explicit action. In addition,
the libraries obtained by -lc and -la have been merged; this combined
library is accessed (where needed) by -lc or -l."
Andrew R. Koenig - The C Environment of UNIX/TS
https://bitsavers.org/pdf/plexus/Sys3/98-05036.5_Plexus_Sys3_UNIX_Programmers_Manual_Vol_2A_RevA_198409.pdf

"New archiver ... uses long (double-word) integers, which can only be
compiled by the new C compiler which we can't distribute yet."
UNIX News, May-June 1976, 6-3
https://www.tuhs.org/Archive/Documentation/Usenix/Early_Newsletters/197605-unix-news-n6.pdf

"a new tape is in the process of preparation, and should be available
as soon as it clears the lawyers, for the usual free license and
handling fee. It contains the new nroff, the new C, the new I/O
library (faster & smaller than anything to date), and a bunch of other
such things. It will be available from Bell, not the Center. Ken is
going to send a formal notice to the News."
UNIX News, November 1976, pg 1
https://www.tuhs.org/Archive/Documentation/Usenix/Early_Newsletters/197611-unix-news-n11.pdf

'Among the items listed are mini-UNIX (previously announced) and
"PHOTOTYPESETTER VERSION 7" which is new.'
https://www.tuhs.org/Archive/Documentation/Usenix/Early_Newsletters/197703-unix-news-v2n3.pdf

"The principal speaker at the Phototypesetting session was Joe Ossana.
He announced a new phototypesetting package, typesetter V7, which is
or soon will be available from Western Electric for a $3300 license
fee. The new package combines NROFF and TROFF and is written in C."
https://archive.org/details/login_august-1977/page/n21/mode/2up
https://www.usenix.org/system/files/login/articles/login_dec15_11_history_login.pdf

https://www.bell-labs.com/usr/dmr/www/licenses/pricelist84.pdf has
different items for "Phototypesetter Programmer's Workbench",
"Phototypesetter V7" and "Independent TROFF"; these are also mentioned
in https://archive.org/details/login_april83_issue/page/n13/mode/2up

"On November 1, 1977 Toronto received a license for Phototypesetter
V7" Salus QCU pg 123
On Sat, Jan 21, 2023 at 11:37 AM Warner Losh <imp@bsdimp.com> wrote:
> Yea. Like many things, there was a transition... the most important
> bit is the shell. And that was more tricky to read through with the
> phone at breakfast...

OK. I've dug a bit further and Clem and I have chatted... Here's the
summary.

We don't have V4's shell, alas, since all we have from that time
period is the C kernel a few months before the 4th edition release. V5
/bin/sh closes fd2 and then dups fd1 to fd2. This creates fd2 as
something special. V6 closes all FDs larger than 1 (2-15) and then
does the dup(1), which it makes sure returns 2 or it closes the file.
While there were features in V6 to allow use of fd2/stderr, few
programs used them.

And neither crypt nor passwd reads from fd2. crypt reads from fd0,
while passwd doesn't read. It just replaces the hashed password with
the new password. I've also looked at pr because

> I do remember that pr -p actually read from FD 2. It probably still
> does.

and that's not true in V7 at least... pr didn't have a 'p' arg :).
Maybe later programs started to do these things, but most of what went
on with V7 was a transition to most error messages on stderr, which
typically went to stdout in V6.

So, people remembering it coming in with V7 are right, in the sense it
was the first release to do it consistently. And the people
remembering V4 or V5 are also right, in a different sense, because the
shell was ensuring fd2 was a copy of fd1 there, which a couple of
programs (diff) used for errors. And I believe that the impetus for
the V7 changes was phototypesetting 'file not found' too often... But
that last bit is mostly because I want to believe.

Warner
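[The V5/V6 shell trick Warner describes can be sketched in a few lines of modern C. This is a reconstruction of the behavior as described in this thread, not the original shell source, and the function name is invented here; it relies on dup() returning the lowest unused descriptor.]

```c
#include <unistd.h>

/* Make fd 2 a copy of fd 1 before exec'ing a command, as the V5/V6
 * shell is described as doing above.  Returns 1 if fd 2 now
 * duplicates fd 1, 0 otherwise. */
static int ensure_fd2(void)
{
    close(2);            /* free descriptor 2 */
    int fd = dup(1);     /* dup() returns the lowest unused fd, i.e. 2 */
    if (fd != 2) {       /* V6 reportedly closed the result if it wasn't 2 */
        if (fd >= 0)
            close(fd);
        return 0;
    }
    return 1;
}
```

[With this in place, a child that wrote errors to fd 2 would, absent any redirection, see them land in the same place as fd 1's output, which is what let programs like diff use fd 2 before stderr had a name.]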
Thanks very much for this interesting archaeological dig.

On Sun, Jan 22, 2023, 4:23 PM Warner Losh <imp@bsdimp.com> wrote:
> On Sat, Jan 21, 2023 at 11:37 AM Warner Losh <imp@bsdimp.com> wrote:
> > Yea. Like many things, there was a transition... the most important
> > bit is the shell. And that was more tricky to read through with the
> > phone at breakfast...
>
> OK. I've dug a bit further and Clem and I have chatted... Here's the
> summary.
>
> We don't have V4's shell, alas, since all we have from that time
> period is the C kernel a few months before the 4th edition release.
> V5 /bin/sh closes fd2 and then dups fd1 to fd2. This creates fd2 as
> something special. V6 closes all FDs larger than 1 (2-15) and then
> does the dup(1), which it makes sure returns 2 or it closes the
> file. While there were features in V6 to allow use of fd2/stderr,
> few programs used them.
>
> And neither crypt nor passwd reads from fd2. crypt reads from fd0,
> while passwd doesn't read. It just replaces the hashed password with
> the new password. I've also looked at pr because
>
> > I do remember that pr -p actually read from FD 2. It probably
> > still does.
>
> and that's not true in V7 at least... pr didn't have a 'p' arg :).
> Maybe later programs started to do these things, but most of what
> went on with V7 was a transition to most error messages on stderr,
> which typically went to stdout in V6.
>
> So, people remembering it coming in with V7 are right, in the sense
> it was the first release to do it consistently. And the people
> remembering V4 or V5 are also right, in a different sense, because
> the shell was ensuring fd2 was a copy of fd1 there, which a couple
> of programs (diff) used for errors. And I believe that the impetus
> for the V7 changes was phototypesetting 'file not found' too
> often... But that last bit is mostly because I want to believe.
>
> Warner
Warner Losh <imp@bsdimp.com> wrote:
> And I believe that the impetus for the V7 changes was phototypesetting
> 'file not found' too often... But that last bit is mostly because I want
> to believe.
Doug McIlroy has told that story here, search the archives.
It was indeed because of the phototypesetter.
Arnold
At one point it was like $22/page.

On Sun, Jan 22, 2023 at 11:30 PM <arnold@skeeve.com> wrote:
> Warner Losh <imp@bsdimp.com> wrote:
> > And I believe that the impetus for the V7 changes was
> > phototypesetting 'file not found' too often... But that last bit
> > is mostly because I want to believe.
>
> Doug McIlroy has told that story here, search the archives.
> It was indeed because of the phototypesetter.
>
> Arnold

--
James D. (jj) Johnston
Chief Scientist, Immersion Networks
At 2023-01-23T00:32:53-0800, James Johnston wrote:
> At one point it was like $22/page.
>
> On Sun, Jan 22, 2023 at 11:30 PM <arnold@skeeve.com> wrote:
> > Warner Losh <imp@bsdimp.com> wrote:
> > > And I believe that the impetus for the V7 changes was
> > > phototypesetting 'file not found' too often... But that last bit
> > > is mostly because I want to believe.
> >
> > Doug McIlroy has told that story here, search the archives.
> > It was indeed because of the phototypesetter.

Steve Johnson's email[0] (which I cited in my earlier reply to this
thread) said it took only "a couple of days" after acquisition of the
phototypesetter for this to happen. But the C/A/T was acquired by
1973[1], and the Hunt Brothers' cornering of the silver market occurred
in March 1980[2], over a year _after_ the release of V7[3].

It seems both of these can't be true. I wondered if maybe only a later
typesetter used a silver-based photochemical development process, but
that doesn't work with troff history either, as device-independent
troff wasn't ready to go until about January 1981, targeting the C/A/T
and the Autologic APS-5[4], the latter with support so fresh that it
wasn't in the manual yet[5].

Can someone reconcile these points?

Regards,
Branden

[0] https://minnie.tuhs.org/pipermail/tuhs/2013-December/006113.html
[1] https://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/man/man1/troff.1
[2] https://en.wikipedia.org/wiki/Silver_Thursday
[3] https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7
[4] https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/Volume_1/00_Annotated_Table_of_Contents.pdf (p. 2)
[5] https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/Volume_1/C.1.2_NROFF_TROFF_Users_Manual.pdf (p. 34)
Original troff targeted only the C/A/T typesetter.

bwc

> On Jan 23, 2023, at 3:58 AM, G. Branden Robinson <g.branden.robinson@gmail.com> wrote:
>
> At 2023-01-23T00:32:53-0800, James Johnston wrote:
> > At one point it was like $22/page.
> >
> > On Sun, Jan 22, 2023 at 11:30 PM <arnold@skeeve.com> wrote:
> > > Warner Losh <imp@bsdimp.com> wrote:
> > > > And I believe that the impetus for the V7 changes was
> > > > phototypesetting 'file not found' too often... But that last
> > > > bit is mostly because I want to believe.
> > >
> > > Doug McIlroy has told that story here, search the archives.
> > > It was indeed because of the phototypesetter.
>
> Steve Johnson's email[0] (which I cited in my earlier reply to this
> thread) said it took only "a couple of days" after acquisition of the
> phototypesetter for this to happen. But the C/A/T was acquired by
> 1973[1], and the Hunt Brothers' cornering of the silver market
> occurred in March 1980[2], over a year _after_ the release of V7[3].
>
> It seems both of these can't be true. I wondered if maybe only a
> later typesetter used a silver-based photochemical development
> process, but that doesn't work with troff history either, as
> device-independent troff wasn't ready to go until about January
> 1981, targeting the C/A/T and the Autologic APS-5[4], the latter
> with support so fresh that it wasn't in the manual yet[5].
>
> Can someone reconcile these points?
>
> Regards,
> Branden
>
> [0] https://minnie.tuhs.org/pipermail/tuhs/2013-December/006113.html
> [1] https://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/man/man1/troff.1
> [2] https://en.wikipedia.org/wiki/Silver_Thursday
> [3] https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7
> [4] https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/Volume_1/00_Annotated_Table_of_Contents.pdf (p. 2)
> [5] https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/Volume_1/C.1.2_NROFF_TROFF_Users_Manual.pdf (p. 34)
Indeed, but the output format was not too hard to decipher, and we wrote
drivers for other things.
One of the oddest projects I helped with was a program called verset,
a C/A/T typesetter emulator that output to a Versatec printer/plotter.
This was my friend George Toth’s baby. What he did was go down to the
NRL and get a complete printout of the typefaces on their C/A/T printed
on film at about 72pt.
He then cut them into individual pieces. He built a flying-spot
scanner out of an oscilloscope with a photomultiplier tube in a scope
camera housing.
Now in the building next to ours was, at the time, the world’s finest
Scanning Transmission Electron Microscope. It was driven by a
PDP-11/20. Essentially the beam scanned back and forth and there was a
detector at the bottom. So George would take the drive and sense wires
off the microscope and connect them to the oscilloscope/camera
combination. He’d take one letter and stick it to the face of the
oscilloscope. We would then bring up the microscope software and tell
it to scan a sample. It would do so, loading the image into the
framebuffer. We would then boot up MiniUnix and read the data out of
the framebuffer and store it on an RK05 pack that we’d later take over
to our main machine, the PDP-11/45 running a full up (hacked V6) UNIX.
It was probably the world’s most expensive text scanner.
Of course, I did some favors for that department, so I got free access
to that machine when it wasn’t being used by actual microscope
operators.
------ Original Message ------
From "Brantley Coile" <brantley@coraid.com>
To "G. Branden Robinson" <g.branden.robinson@gmail.com>
Cc tuhs@tuhs.org
Date 1/23/2023 6:49:03 AM
Subject [TUHS] Re: FD 2
>Original troff targeted only the C/A/T typesetter.
>bwc
>
>> And I believe that the impetus for the V7 changes was phototypesetting
>> 'file not found' too often... But that last bit is mostly because I want
>> to believe.
>
> Doug McIlroy has told that story here, search the archives.
> It was indeed because of the phototypesetter.

If I ever said that I must have been smoking something at the time.
The phototypesetter is a dramatic example, but a minor cause.
The major cause was pipes. Downstream programs fail when they
receive diagnostics rather than data. And users never see the
reason why.

Doug
What a great story!
On January 23, 2023 9:25:25 AM EST, Ronald Natalie <ron@ronnatalie.com> wrote:
>Indeed but the output format was not too hard to decipher and we wrote drivers for other things.
>[...]
I stand corrected. Thanks.
Arnold
Douglas McIlroy <douglas.mcilroy@dartmouth.edu> wrote:
> >> And I believe that the impetus for the V7 changes was phototypesetting
> >> 'file not found' too often... But that last bit is mostly because I want
> >> to believe.
>
> > Doug McIlroy has told that story here, search the archives.
> > It was indeed because of the phototypesetter.
>
> If I ever said that I must have been smoking something at the time.
> The phototypesetter is a dramatic example, but a minor cause.
> The major cause was pipes. Downstream programs fail
> when they receive diagnostics rather than data. And users
> never see the reason why.
>
> Doug
[-- Attachment #1: Type: text/plain, Size: 2762 bytes --]
On Sun, Jan 22, 2023 at 2:23 PM Warner Losh <imp@bsdimp.com> wrote:
> On Sat, Jan 21, 2023 at 11:37 AM Warner Losh <imp@bsdimp.com> wrote:
>> Yea. Like many things, there was a transition... the most important bit
>> is the shell. And that was more tricky to read through with the phone at
>> breakfast...
>
> OK. I've dug a bit further and Clem and I have chatted... Here's the
> summary.
>
> We don't have V4's shell, alas, since all we have from that time period is
> the C kernel a few months before the 4th edition release. V5 /bin/sh closes
> fd 2 and then dups fd 1 to fd 2. This creates fd 2 as something special.
> V6 closes all fds larger than 1 (2-15) and then does the dup(1), which it
> makes sure returns 2, or it closes the file. While there were features in
> V6 to allow use of fd 2/stderr, few programs used them.
>
> And neither crypt nor passwd reads from fd 2. crypt reads from fd 0, while
> passwd doesn't read. It just replaces the hashed password with the new
> password. I've also looked at pr because
>
>> I do remember that pr -p actually read from FD 2. It probably still does.
>
> and that's not true in V7 at least... pr didn't have a 'p' arg :). Maybe
> later programs started to do these things, but most of what went on with V7
> was a transition to most error messages on stderr, which typically went to
> stdout in V6.
>
> So, people remembering it coming in with V7 are right, in the sense it was
> the first release to do it consistently. And the people remembering V4 or
> V5 are also right, in a different sense, because the shell was ensuring fd 2
> was a copy of fd 1 there, which a couple of programs (diff) used for errors.
> And I believe that the impetus for the V7 changes was phototypesetting
> 'file not found' too often... But that last bit is mostly because I want
> to believe.

One last historical footnote.
In researching a forthcoming article for the 30th anniversary of FreeBSD, I noticed the following: https://www.bell-labs.com/usr/dmr/www/hist.pdf contains Dennis Ritchie's history of the early days of Unix. It was first published in 1979 (so right around the time V7 was being finalized), and revised/reprinted in the famous AT&T Bell Laboratories Technical Journal 63 No. 6 Part 2, October 1984 issue. He talks about FD 0 and FD 1 being special in the pdp-7 implementation on page 4: "The main loop of the shell went as follows: 1) The shell closed all its open files, then opened the terminal special file for standard input and output (file descriptors 0 and 1). ..." No mention is made of when fd 2 became standard error in this paper, which does mention a lot of other unix milestones (hierarchical notation for directories, fork/exec changes, pipes, etc), but not this one. > Warner > [-- Attachment #2: Type: text/html, Size: 3998 bytes --]
[-- Attachment #1: Type: text/plain, Size: 755 bytes --]
> "The main loop of the shell went as follows:
> 1) The shell closed all its open files, then opened the terminal
> special file for standard input and output (file descriptors 0 and 1).

Unfortunately, the source code says otherwise. None of shells V6, PWB,
V7 do anything like is mentioned above. They assume 0 and 1 (and 2) are
already open. The only fd redirection they do is when you do pipes or
redirection on the command line.

Where this is done is, as I posted earlier, in /etc/init. Init opens the
tty device and dups it to 1 and then invokes either the shell (if we're
in single user mode) or getty for interactive mode. This was done in V6
and PWB (1). In V7, init added a second dup for file descriptor 2.

[-- Attachment #2: Type: text/html, Size: 2500 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1144 bytes --]
On Sun, Jan 29, 2023, 12:21 PM Ron Natalie <ron@ronnatalie.com> wrote:
> > "The main loop of the shell went as follows:
> > 1) The shell closed all its open files, then opened the terminal special
> > file for standard input and output (file descriptors 0 and 1).
>
> Unfortunately, the source code says otherwise. None of shells V6, PWB,
> V7 do anything like is mentioned above. They assume 0 and 1 (and 2) are
> already open.
> The only fd redirection they do is when you do pipes or redirection on the
> command line.

This was in reference to the pdp-7 unix implementation. I didn't check the
pdp7 version though. So what you said is true about the later versions for
sure, starting with v5.

> Where this is done is, as I posted earlier, in /etc/init. Init opens the
> tty device and dups it to 1 and then invokes either the shell (if we're in
> single user mode) or getty for interactive mode.
> This was done in V6 and PWB (1). In V7, init added a second dup for file
> descriptor 2.

Yes. I quite enjoyed that. My surprise was Dennis' paper didn't mention the
innovation since it was otherwise quite detailed.

Warner

[-- Attachment #2: Type: text/html, Size: 3255 bytes --]
Ron Natalie wrote:
> Warner Losh wrote:
> > "The main loop of the shell went as follows:
> > 1) The shell closed all its open files, then opened the terminal
> > special file for standard input and output (file descriptors 0 and 1).
>
> Unfortunately, the source code says otherwise. None of shells V6, PWB,
> V7 do anything like is mentioned above. They assume 0 and 1 (and 2)
> are already open.
> The only fd redirection they do is when you do pipes or redirection on
> the command line.
>
> Where this is done is, as I posted earlier, in /etc/init. Init opens
> the tty device and dups it to 1 and then invokes either the shell (if
> we're in single user mode) or getty for interactive mode.
> This was done in V6 and PWB (1). In V7, init added a second dup for
> file descriptor 2.

I think the text Warner quoted applied to only the pre-fork version of
the system, since it's preceded by:

"Processes (independently executing entities) existed very early in
PDP-7 Unix. There were in fact precisely two of them, one for each
of the two terminals attached to the machine. There was no fork,
wait, or exec. There was an exit, but its meaning was rather
different, as will be seen."

and ends with:

"4) The command did its work, then terminated by calling exit. The
exit call caused the system to read in a fresh copy of the shell
over the terminated command, then to jump to its start (and thus in
effect to go to step 1)."

In the (late) PDP-7 Unix we have sources for, FDs 1 & 2 are
established in init at labels init1 (for the user on the console tty)
and init2 (for the user on the display & display keyboard) after forking
and doing the job of the login executable:

https://github.com/DoctorWkt/pdp7-unix/blob/master/src/cmd/init.s

A remaining mystery about the set of PDP-7 Unix sources we have is
that cat.s (and only cat.s?) writes error messages to FD 8, but I
haven't seen any place that sets FD 8:

https://github.com/DoctorWkt/pdp7-unix/blob/master/scans/cat.s#L43
https://github.com/DoctorWkt/pdp7-unix/blob/master/scans/cat.s#L45
https://github.com/DoctorWkt/pdp7-unix/blob/master/scans/cat.s#L51

Listing scan at:

https://www.tuhs.org/Archive/Distributions/Research/McIlroy_v0/06-5-12.pdf#page=21
[-- Attachment #1: Type: text/plain, Size: 2476 bytes --]
On Sun, Jan 29, 2023, 5:25 PM Phil Budne <phil@ultimate.com> wrote:
> [...]
> In the (late) PDP-7 Unix we have sources for, FDs 1 & 2 are
> established in init at labels init1 (for the user on the console tty)
> and init2 (for the user on the display & display keyboard) after forking
> and doing the job of the login executable:
>
> https://github.com/DoctorWkt/pdp7-unix/blob/master/src/cmd/init.s

It looks like 0 & 1 to me. Am I missing something?

Warner

[-- Attachment #2: Type: text/html, Size: 3959 bytes --]
> > Where this is done is, as I posted earlier, in /etc/init. Init opens the
> > tty device and dups it to 1 and then invokes either the shell (if we're in
> > single user mode) or getty for interactive mode.
> > This was done in V6 and PWB (1). In V7, init added a second dup for file
> > descriptor 2.
What I really like is how in V8 - V10 /dev/tty was a (sym)link to /dev/fd/3,
and init simply did one more dup call. That eliminated the special tty device
driver and was a lovely generalization / simplification. It's too bad this
was never picked up in the commercial Unixes or in Linux.
Arnold
[-- Attachment #1: Type: text/plain, Size: 928 bytes --] In Plan 9 we made /dev/tty (there called /dev/cons) fully virtual, a fiction that provided the semantics. I agree that it's a pity the /dev/tty kludge persists as a kernel hack. And then we have pttys, speaking of pitys. -rob On Mon, Jan 30, 2023 at 6:50 PM <arnold@skeeve.com> wrote: > > > Where this is done is, as I posted earlier, in /etc/init. Init > opens the > > > tty device and dups it to 1 and then invokes either the shell (if > we're in > > > single user mode) or getty for interactive mode. > > > This was done in V6 and PWB (1). In V7, init added a second dup for > file > > > descriptor 2. > > What I really like is how in V8 - V10 /dev/tty was a (sym)link to > /dev/fd/3, > and init simply did one more dup call. That eliminated the special tty > device > driver and was a lovely generalization / simplification. It's too bad this > was never picked in the commercial Unixes or in Linux. > > Arnold > [-- Attachment #2: Type: text/html, Size: 1705 bytes --]
[-- Attachment #1: Type: text/plain, Size: 556 bytes --] On Mon, Jan 30, 2023 at 2:50 AM <arnold@skeeve.com> wrote: What I really like is how in V8 - V10 /dev/tty was a (sym)link to /dev/fd/3, and init simply did one more dup call. That eliminated the special tty device driver and was a lovely generalization / simplification. It's too bad this was never picked in the commercial Unixes or in Linux. That's problematic. I often need to redirect 2>&1 in order to get a stream of both output and error into a file or into less(1), without interfering with *input* from the terminal, as for example a password. [-- Attachment #2: Type: text/html, Size: 1061 bytes --]
John Cowan <cowan@ccil.org> wrote:
> On Mon, Jan 30, 2023 at 2:50 AM <arnold@skeeve.com> wrote:
>
> > What I really like is how in V8 - V10 /dev/tty was a (sym)link to /dev/fd/3,
> > and init simply did one more dup call. That eliminated the special tty
> > device
> > driver and was a lovely generalization / simplification. It's too bad this
> > was never picked in the commercial Unixes or in Linux.
>
> That's problematic. I often need to redirect 2>&1 in order to get a stream
> of both output and error into a file or into less(1), without interfering
> with *input* from the terminal, as for example a password.
I don't see how. Input is from fd 0. fd 3 is open on the original terminal
device by init, so even if you closed fd 0, opening /dev/tty gets you fd 3
gets you the original tty.
Granted, someone could also redirect fd 3, but that's unlikely in most cases.
Arnold
On Mon, Jan 30, 2023 at 07:09:55PM +1100, Rob Pike wrote:
> And then we have pttys, speaking of pitys.
I'm not seeing how you do stuff like ssh into a remote system and have
job control, etc, work without some sort of tty.
On Mon, Jan 30, 2023 at 10:02 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Jan 30, 2023 at 07:09:55PM +1100, Rob Pike wrote:
> > And then we have pttys, speaking of pitys.
>
> I'm not seeing how you do stuff like ssh into a remote system and have
> job control, etc, work without some sort of tty.
You don't. But perhaps that model isn't super great.
There was no job control on plan9 and I can't say I ever missed it. If
I needed another terminal, I just swept open another window. Job
control, even remote access a la SSH (or telnet, or rlogin), are a bit
of an historical accident. If, instead, my computing environment is
the set of shared resources I've imported into my system, then I don't
necessarily need something like that. The plan9 `cpu` command, for
access to a remote CPU server, conceptually brought the CPU server to
you, not the other way around.
It was a very different model.
- Dan C.
On Mon, Jan 30, 2023 at 10:16:25AM -0500, Dan Cross wrote:
> On Mon, Jan 30, 2023 at 10:02 AM Larry McVoy <lm@mcvoy.com> wrote:
> > On Mon, Jan 30, 2023 at 07:09:55PM +1100, Rob Pike wrote:
> > > And then we have pttys, speaking of pitys.
> >
> > I'm not seeing how you do stuff like ssh into a remote system and have
> > job control, etc, work without some sort of tty.
>
> You don't. But perhaps that model isn't super great.
>
> There was no job control on plan9 and I can't say I ever missed it. If
> I needed another terminal, I just swept open another window. Job
> control, even remote access a la SSH (or telnet, or rlogin), are a bit
> of an historical accident. If, instead, my computing environment is
> the set of shared resources I've imported into my system, then I don't
> necessarily need something like that. The plan9 `cpu` command, for
> access to a remote CPU server, conceptually brought the CPU server to
> you, not the other way around.
>
> It was a very different model.
$ vi foo.c
hack, hack
^Z
$ make
test test test, broken
$ fg
Yes, I could do that in 2 different terminals but that mode of working
is extremely useful, works when I don't have a windowing system, I'm
on the console.
Not having that model is a deal breaker for me, and I suspect a non
trivial number of other people.
On Mon, Jan 30, 2023 at 10:27 AM Larry McVoy <lm@mcvoy.com> wrote:
> [...]
> Yes, I could do that in 2 different terminals but that mode of working
> is extremely useful, works when I don't have a windowing system, I'm
> on the console.
>
> Not having that model is a deal breaker for me, and I suspect a non
> trivial number of other people.

Yup. That's a way to work, and if you work with Unix, it's a common one.
Plan 9 was different, and a lot of people who were familiar with Unix
didn't like that, and were not interested in trying out a different way
if it meant that they couldn't bring their existing mental models and
workflows into the new environment unchanged.

There was no `vi` on plan9, either; well, there was, but it was a MIPS
emulator, not a text editor. But with the `acme` editor, your active
text editor panes and your terminal window were all part of the editor
itself: https://www.youtube.com/watch?v=dP1xVpMPn8M

At one point it struck me that Plan 9 didn't succeed as a widespread
replacement for Unix/Linux because it was bad or incapable, but rather,
because people wanted Linux, and not plan9.

- Dan C.
On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> Plan 9 was different, and a lot of people who were familiar with Unix
> didn't like that, and were not interested in trying out a different
> way if it meant that they couldn't bring their existing mental models
> and workflows into the new environment unchanged.
>
> At one point it struck me that Plan 9 didn't succeed as a widespread
> replacement for Unix/Linux because it was bad or incapable, but
> rather, because people wanted Linux, and not plan9.
Many people make that mistake. New stuff instead of extend old stuff.
Look at programming languages for instance. We had C, it was pretty
simple to understand, but people wanted more stuff. So now we have
things like Rust that is pretty much completely different. Could we
not have extended C to do what Rust does? Why do we need an entirely
different syntax to say the same things?
Seems like Plan 9 fell into that trap. When you invalidate all of the
existing knowledge that people have, that creates a barrier to entry.
As you said, people don't want to give up their mental model when that
model works. They'll only give it up when there is at least a factor
of 2 improvement that they care about. These days it feels like people
are stuck enough that they want a factor of 10.
> On Jan 30, 2023, at 7:18 AM, Dan Cross <crossd@gmail.com> wrote:
>
> There was no job control on plan9 and I can't say I ever missed it.
Just yesterday I realized running two “make -j 8” in parallel was making
them both go real slow so I stopped one of them with ^Z and continued
it once the other make finished. This use case can’t be handled with more
windows.
On Mon, Jan 30, 2023 at 08:03:42AM -0800, Bakul Shah wrote:
> > On Jan 30, 2023, at 7:18 AM, Dan Cross <crossd@gmail.com> wrote:
> >
> > There was no job control on plan9 and I can't say I ever missed it.
>
> Just yesterday I realized running two "make -j 8" in parallel was making
> them both go real slow so I stopped one of them with ^Z and continued
> it once the other make finished. This use case can't be handled with more
> windows.

If you don't have job control, do you not have ^C to kill one of the
make jobs? I guess you open another window and ps and kill it but if
the machine is already thrashing that is gonna be unpleasant.

--
--- Larry McVoy        Retired to fishing        http://www.mcvoy.com/lm/boat
On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> > At one point it struck me that Plan 9 didn't succeed as a widespread
> > replacement for Unix/Linux because it was bad or incapable, but
> > rather, because people wanted Linux, and not plan9.
>
> Many people make that mistake. New stuff instead of extend old stuff.

Some would argue that's not a mistake. How else do we innovate if we're
just incrementally polishing what's come before? Computing now is
radically different than when I started, but if I'd listened to my
betters back then, it wouldn't have changed all that much. Actually I
_did_ listen to my betters, and now I'm sad that I missed out on a lot
of exciting things.

> Look at programming languages for instance. We had C, it was pretty
> simple to understand, but people wanted more stuff.

I think C is a language that people _think_ is simple to understand, and
perhaps _was_ simple to understand, but is actually remarkably subtle,
and a _lot_ of people don't actually have a great handle on how it
really works anymore. Particularly now, when the compiler people seem
to be prizing optimization above all else, so even obvious-looking code
that's technically "undefined" produces unexpected behavior (e.g.,
`if (a > 0 && b > 0 && a*b < 0) overflow(); /* signed integer overflow is UB */`).
Maybe sadly, C hasn't been a portable macro assembler for decades now.

> So now we have
> things like Rust that is pretty much completely different. Could we
> not have extended C to do what Rust does? Why do we need an entirely
> different syntax to say the same things?

People tried to extend C to do the things that Rust does and it didn't
work.

> Seems like Plan 9 fell into that trap. When you invalidate all of the
> existing knowledge that people have, that creates a barrier to entry.

Plan 9, as a research system, was an experiment in doing things
differently. As a research system, it was remarkably influential: a lot
of the ideas made it into e.g. Linux. Imitation is the most sincere
form of flattery. As a production system, people just wanted Linux.
There was a time when people wanted to try out new ideas; oh well.

> As you said, people don't want to give up their mental model when that
> model works. They'll only give it up when there is at least a factor
> of 2 improvement that they care about. These days it feels like people
> are stuck enough that they want a factor of 10.

Yup, that's about right. The mainframe is still going strong, after all!

- Dan C.
On Jan 30, 2023, at 8:07 AM, Larry McVoy <lm@mcvoy.com> wrote:
>
> On Mon, Jan 30, 2023 at 08:03:42AM -0800, Bakul Shah wrote:
>>
>>
>>>> On Jan 30, 2023, at 7:18 AM, Dan Cross <crossd@gmail.com> wrote:
>>>
>>> There was no job control on plan9 and I can't say I ever missed it.
>>
>> Just yesterday I realized running two "make -j 8" in parallel was making
>> them both go real slow so I stopped one of them with ^Z and continued
>> it once the other make finished. This use case can't be handled with more
>> windows.
>
> If you don't have job control, do you not have ^C to kill one of the
> make jobs? I guess you open another window and ps and kill it but
> if the machine is already thrashing that is gonna be unpleasant.
Yes but then you lose a bunch of work. You don’t want to restart,
you just want to delay.
On Mon, Jan 30, 2023 at 11:07 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Jan 30, 2023 at 08:03:42AM -0800, Bakul Shah wrote:
> > > On Jan 30, 2023, at 7:18 AM, Dan Cross <crossd@gmail.com> wrote:
> > > There was no job control on plan9 and I can't say I ever missed it.
> >
> > Just yesterday I realized running two "make -j 8" in parallel was making
> > them both go real slow so I stopped one of them with ^Z and continued
> > it once the other make finished. This use case can't be handled with more
> > windows.
>
> If you don't have job control, do you not have ^C to kill one of the
> make jobs? I guess you open another window and ps and kill it but
> if the machine is already thrashing that is gonna be unpleasant.
You do, but it's not a kernel-level thing; there's no TTY abstraction
with an attendant driver. Instead, the window system will interpret
DEL as an interrupt request and send a note to the process group
running in that window.
Plan 9 didn't have signals; rather, there were "notes", that were
little strings that could be sent between processes. It was the
responsibility of the receiver to parse these and take appropriate
action in a note handler; there was a "stop" note that could be used
to pause a process. One could presumably use that to pause a process
group that was using excessive CPU.
- Dan C.
On Mon, Jan 30, 2023 at 11:09:03AM -0500, Dan Cross wrote:
> On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
> > On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> > > Plan 9 was different, and a lot of people who were familiar with Unix
> > > didn't like that, and were not interested in trying out a different
> > > way if it meant that they couldn't bring their existing mental models
> > > and workflows into the new environment unchanged.
> > >
> > > At one point it struck me that Plan 9 didn't succeed as a widespread
> > > replacement for Unix/Linux because it was bad or incapable, but
> > > rather, because people wanted Linux, and not plan9.
> >
> > Many people make that mistake. New stuff instead of extend old stuff.
>
> Some would argue that's not a mistake. How else do we innovate if
> we're just incrementally polishing what's come before?
I didn't say limit yourself to polishing, I said try and not invalidate
people's knowledge while innovating.
Too many people go down the path of doing things very differently and
they rationalize that they have to do it that way to innovate. That's
fine but it means it is going to be harder to get people to try your
new stuff.
The point I'm trying to make is that "different" is a higher barrier,
much, much higher, than "extend". People frequently ignore that and
that means other people ignore their work.
It is what it is, I doubt I'll convince anyone so I'll drop it.
Hi Bakul,
> > There was no job control on plan9 and I can't say I ever missed it.
>
> Just yesterday I realized running two “make -j 8” in parallel was
> making them both go real slow so I stopped one of them with ^Z and
> continued it once the other make finished. This use case can’t be
> handled with more windows.
If I lacked job control in that situation, I'd either SIGSTOP one of the
two with kill(1) for a later SIGCONT or renice it to hardly get a look
in until its peer had finished. (The TTY's susp character for job
control is SIGTSTP.)
--
Cheers, Ralph.
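Ralph's recipe can be sketched in shell. This is a hypothetical session (the `sleep` stands in for the second make); on Linux, ps shows state `T` while the job is stopped:

```shell
# Pause one of two competing jobs without killing it, then resume it later
# (here 'sleep 600' stands in for the second "make -j 8").
sleep 600 &
pid=$!

kill -STOP "$pid"               # like ^Z's SIGTSTP, but cannot be caught
ps -o pid=,stat= -p "$pid"      # on Linux the state column shows 'T' (stopped)

# Alternatively, keep it running but at the lowest scheduling priority:
# renice -n 19 -p "$pid"

kill -CONT "$pid"               # resume once the other job has finished
kill "$pid"                     # clean up the stand-in job
```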
On Mon, 30 Jan 2023, Dan Cross wrote:

> I think C is a language that people _think_ is simple to understand,
> and perhaps _was_ simple to understand, but is actually remarkably
> subtle and a _lot_ of people don't actually have a great handle on how
> it really works anymore. Particularly now, when the compiler people
> seem to be prizing optimization above all else and so even obvious
> behavior that's technically "undefined" results in unexpected behavior
> (e.g., `if (a > 0 && b > 0 && (a*b) < 0) overflow(); // signed integer
> overflow is UB`). Maybe sadly, C hasn't been a portable macro
> assembler for decades now.

I always assume compiler braindeath. Always use parentheses; never
assume any specific order of precedence. I've actually been bitten by
compilers doing unexpectedly stupid things, so I guard around them.

I'll never merge two chars into a short with c=(a<<8)|b; I will ALWAYS
do this:

  c=a;
  c<<=8;
  c|=b;

because 16-bit MS-DOS compilers will ALWAYS manage to blunder that,
though 32-bit compilers get it right (the sane thing to do would be to
treat it as a word bitshift, but the compiler treats it as a byteshift
because a is uint8_t).

I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).

I always try to assume the compiler will wreck my code in hilariously
braindamaged ways and code so precisely that it cannot do so.

<snip>

> Imitation is the most sincere form of flattery.

The actual saying continues "...that mediocrity can make to greatness",
which changes the meaning a bit. ;p

-uso.
[-- Attachment #1: Type: text/plain, Size: 298 bytes --]

On Mon, 30 Jan 2023, Bakul Shah wrote:

> Yes but then you lose a bunch of work. You don’t want to restart,
> you just want to delay.

If it restarts, then there's something very wrong with your Makefile.
make is designed to pick up where it left off if it dies before
finishing.

-uso.
On Mon, Jan 30, 2023 at 11:21:45AM -0500, Steve Nickolas wrote:
> I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).
I'm a
if ((a == b) && (c == d))
guy but yeah, spell it out not just so the compiler doesn't mess with
you, for me, it is more so my eyes run over the expression and I can
parse it with less brain cells, parens let me think less.
I've always had the opinion that code should be written in a way that
is the most easy to read.
[-- Attachment #1: Type: text/plain, Size: 724 bytes --]

This is no longer about FD2, maybe it's time for a source quench on the
thread? I keep expecting new messages on FD2 and somehow we're arguing
about C and make again :-)

On Mon, Jan 30, 2023 at 8:27 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Jan 30, 2023 at 11:21:45AM -0500, Steve Nickolas wrote:
> > I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).
>
> I'm a
>
>   if ((a == b) && (c == d))
>
> guy but yeah, spell it out not just so the compiler doesn't mess with
> you, for me, it is more so my eyes run over the expression and I can
> parse it with less brain cells, parens let me think less.
>
> I've always had the opinion that code should be written in a way that
> is the most easy to read.

[-- Attachment #2: Type: text/html, Size: 1092 bytes --]
[-- Attachment #1: Type: text/plain, Size: 364 bytes --]

On Mon, Jan 30, 2023 at 11:27 AM Larry McVoy <lm@mcvoy.com> wrote:
> I've always had the opinion that code should be written in a way that
> is the most easy to read.

Amen -- using the principle of '*least obscure*' or obtuse. Being
obvious means thinking about a quick and casual human reader of the
code, not a compiler or someone taking intense notes.

[-- Attachment #2: Type: text/html, Size: 1139 bytes --]
On Jan 30, 2023, at 8:19 AM, Ralph Corderoy <ralph@inputplus.co.uk> wrote:
>
> Hi Bakul,
>
>>> There was no job control on plan9 and I can't say I ever missed it.
>>
>> Just yesterday I realized running two “make -j 8” in parallel was
>> making them both go real slow so I stopped one of them with ^Z and
>> continued it once the other make finished. This use case can’t be
>> handled with more windows.
>
> If I lacked job control in that situation, I'd either SIGSTOP one of the
> two with kill(1) for a later SIGCONT or renice it to hardly get a look
> in until its peer had finished. (The TTY's susp character for job
> control is SIGTSTP.)
Unfortunately the renice command can only be used by mere users
to *increase* niceness, not to decrease it. Agreed that you don’t need
a tty discipline for SIGSTOP but it is quite convenient! Note that even
plan9 allows killing a process by hitting a single key but I suppose
that is done in a different way.
Steve Nickolas made a comment about how rerunning make should
simply continue from where it left off. Unfortunately many do not.
A typical silent error is an output file is created but the program is
killed before it can finish the job. The next time around this program
won’t run since the output already exists! Or you have a “make world”
situation where everything is removed before building anything!
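The usual guard against that half-written-output failure is to have each rule write to a temporary name and rename only on success, since rename is atomic on the same filesystem. A sketch of the pattern (the file names are made up):

```shell
# A make recipe often looks like:  cmd < in > out
# If cmd is killed mid-write, 'out' exists but is garbage, and the next
# make run wrongly treats it as up to date. Instead, write to a
# temporary and rename only after the command has fully succeeded:
printf 'hello\n' > in
tr a-z A-Z < in > out.tmp && mv out.tmp out
cat out        # HELLO
```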
On Mon, Jan 30, 2023 at 11:09:03AM -0500, Dan Cross wrote:
> > > At one point it struck me that Plan 9 didn't succeed as a widespread
> > > replacement for Unix/Linux because it was bad or incapable, but
> > > rather, because people wanted Linux, and not plan9.
> >
> > Many people make that mistake. New stuff instead of extend old stuff.
>
> Some would argue that's not a mistake. How else do we innovate if
> we're just incrementally polishing what's come before?

The challenge is how do you extend the most common interfaces so that
you don't break the most common workflows and applications, without
adding too much backwards compatibility hair, and without stifling
innovation. I think we can err in both directions --- break the world,
and no one will adopt the OS, and too much backwards compatibility and
it's much, much harder to innovate.

The happy medium which Linux uses is that the userspace interfaces
should mostly stay compatible ("thou shalt not break userspace") unless
there are really, really good reasons (such as security vulnerabilities)
when you can't really do that. But internal interfaces, including those
that might be used by out-of-tree kernel modules --- it's totally fair
game, and if you have academics who have professors who were trained up
on the 4.14 kernel when they were in graduate school --- sorry, your
knowledge is out of date, and you can either force your graduate
students to use an obsolescent kernel for their research, or you're
going to have to get with the times.

(I remember in the early days of BSD where people considered it a
feature that academic research tools wouldn't be broken; I believe I
remember hearing that as an excuse not to rework the networking and tty
layers, whereas Linux rewrote the networking stack from scratch 2 or 3
times, and we've since completely reworked the block layer to deal with
ultra-fast devices, even if that meant that all external device drivers
would be broken.)

Of course, there will be kvetching no matter where you draw the line,
but simply saying, to *hell* with the old ways of doing things, and we
can't break anything, even internal interfaces, are both recipes for
failure IMHO.

> > As you said, people don't want to give up their mental model when that
> > model works. They'll only give it up when there is at least a factor
> > of 2 improvement that they care about. These days it feels like people
> > are stuck enough that they want a factor of 10.
>
> Yup, that's about right. The mainframe is still going strong, after all!

One of the things that we can do at $WORK is to translate storage TCO
costs (cost per byte, cost per IOPS), cost of CPU utilization, cost of
memory, etc., into SWE-equivalents. So when we propose a new
initiative, we'll take the cost to implement the project, as well as the
cost to change all of the downstream users in the software/storage
stack, and do a cost benefit comparison.

For example, we might take the cost to implement the use of some new
storage technique, such as Hybrid SMR[1], add a generous buffer because
projects always take longer and cost more than you first might think,
and then compare it to the 5 year storage TCO costs, using
SWE-equivalents as the common unit of comparison. And the project will
only get green-lighted if the benefits exceed the costs by a goodly
margin.

[1] https://blog.google/products/google-cloud/dynamic-hybrid-smr-ocp-proposal-improve-data-center-disk-drives/

Of course, things are different if you are working in academia, where
things like getting published in a venue that will help build you or
your associates' tenure case are going to be more important. In any
case, if we are going to claim that programming is an engineering
discipline, then engineering tradeoffs are important to remember.

Companies or departments who forget this lesson are much more likely to
get reminded of it when the activist investors start pushing for
layoffs, or when you start wondering why your company had to get sold
off to the highest bidder...

- Ted
[-- Attachment #1: Type: text/plain, Size: 3065 bytes --]

On Monday, January 30, 2023, Dan Cross <crossd@gmail.com> wrote:
> On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
> > On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> > > Plan 9 was different, and a lot of people who were familiar with Unix
> > > didn't like that, and were not interested in trying out a different
> > > way if it meant that they couldn't bring their existing mental models
> > > and workflows into the new environment unchanged.
> > >
> > > At one point it struck me that Plan 9 didn't succeed as a widespread
> > > replacement for Unix/Linux because it was bad or incapable, but
> > > rather, because people wanted Linux, and not plan9.
> >
> > Many people make that mistake. New stuff instead of extend old stuff.
>
> Some would argue that's not a mistake. How else do we innovate if
> we're just incrementally polishing what's come before?

I would argue that Linux actually did a lot of things differently. It
tried to conform to POSIX, but still there were a lot of fresh ideas
that actually took off. It was not possible in the free BSD world which
inherited much more from the old Unix world.

> > So now we have things like Rust that is pretty much completely
> > different. Could we not have extended C to do what Rust does? Why do
> > we need an entirely different syntax to say the same things?
>
> People tried to extend C to do the things that Rust does and it didn't
> work.

Smells like C++ to me. Rust in essence is a re-implementation of C++,
not C. It tries to pack in as many features as it possibly can. I
don't know of any other language that throughout the years remained as
pure and minimal as C (maybe Forth).

> > Seems like Plan 9 fell into that trap. When you invalidate all of the
> > existing knowledge that people have, that creates a barrier to entry.
>
> Plan 9, as a research system, was an experiment in doing things
> differently. As a research system, it was remarkably influential: a
> lot of the ideas made it into e.g. Linux. Imitation is the most
> sincere form of flattery. As a production system, people just wanted
> Linux. There was a time when people wanted to try out new ideas; oh
> well.

Linux came out in the right place at the right time, right around the
time when the Internet really became a cyberspace spanning the whole
globe. Finland was first connected to the Internet in 1989. Linus
bought his first 386DX33 in January 1991.

To me Linux represented a revolution in computing. It built on the
shoulders of Unix forefathers but at the same time was a breath of
fresh air in the Unix space. Young people at the time wanted that.
That's why it became so wildly popular. It was a completely free,
idealistic worldwide movement. It brought together a diverse group of
people: university Unix programmers, home computer enthusiasts and
demoscene hackers who just recently replaced their 8-bit C64's and
Atari's with fresh 386-based PCs, young security hackers who watched
too much War Games, etc. It was a very fresh movement at the time.

--Andy

[-- Attachment #2: Type: text/html, Size: 3893 bytes --]
[-- Attachment #1: Type: text/plain, Size: 2157 bytes --]

On Mon, Jan 30, 2023 at 9:59 AM Andy Kosela <akosela@andykosela.com> wrote:
> On Monday, January 30, 2023, Dan Cross <crossd@gmail.com> wrote:
>> On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
>> > On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
>> > > Plan 9 was different, and a lot of people who were familiar with Unix
>> > > didn't like that, and were not interested in trying out a different
>> > > way if it meant that they couldn't bring their existing mental models
>> > > and workflows into the new environment unchanged.
>> > >
>> > > At one point it struck me that Plan 9 didn't succeed as a widespread
>> > > replacement for Unix/Linux because it was bad or incapable, but
>> > > rather, because people wanted Linux, and not plan9.
>> >
>> > Many people make that mistake. New stuff instead of extend old stuff.
>>
>> Some would argue that's not a mistake. How else do we innovate if
>> we're just incrementally polishing what's come before?
>
> I would argue that Linux actually did a lot of things differently. It
> tried to conform to POSIX, but still there were a lot of fresh ideas
> that actually took off.

Yes, but one legacy of that was Linux tried to use the System V ABI
everywhere with extensions, and that means errno values are different in
Linux for different platforms, signals are a bit different, etc.

> It was not possible in the free BSD world which inherited much more
> from the old Unix world.

It's been totally possible in the BSD world. The vm systems have been
redone in ways that make the original look different, the tty layers are
now completely different (something bde couldn't accomplish in the early
days), the autoconf / device probing/attaching is different, the fact
that removable devices extend well beyond disk packs, SMP support
(several different flavors), bus independent dma and device register
access, etc.

Now granted, in the earliest of days some things seemed too sacrosanct,
but all the BSDs quickly learned being too rigid didn't generally have
good outcomes (though some corners of the systems have taken longer than
others to realize that).

Warner

[-- Attachment #2: Type: text/html, Size: 3092 bytes --]
On Mon, Jan 30, 2023 at 11:18 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Jan 30, 2023 at 11:09:03AM -0500, Dan Cross wrote:
> > On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
> > > On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> > > > Plan 9 was different, and a lot of people who were familiar with Unix
> > > > didn't like that, and were not interested in trying out a different
> > > > way if it meant that they couldn't bring their existing mental models
> > > > and workflows into the new environment unchanged.
> > > >
> > > > At one point it struck me that Plan 9 didn't succeed as a widespread
> > > > replacement for Unix/Linux because it was bad or incapable, but
> > > > rather, because people wanted Linux, and not plan9.
> > >
> > > Many people make that mistake. New stuff instead of extend old stuff.
> >
> > Some would argue that's not a mistake. How else do we innovate if
> > we're just incrementally polishing what's come before?
>
> I didn't say limit yourself to polishing, I said try and not invalidate
> people's knowledge while innovating.
>
> Too many people go down the path of doing things very differently and
> they rationalize that they have to do it that way to innovate. That's
> fine but it means it is going to be harder to get people to try your
> new stuff.
>
> The point I'm trying to make is that "different" is a higher barrier,
> much, much higher, than "extend". People frequently ignore that and
> that means other people ignore their work.
>
> It is what it is, I doubt I'll convince anyone so I'll drop it.
Oh, I don't know. I think it's actually kind of important to see _why_
people didn't want to look deeper into plan9 (for example). The system
had a lot to offer, but you had to dig a bit to get into it; a lot of
folks never got that far. If it was really lack of job control, then
that's a shame.
- Dan C.
On Mon, Jan 30, 2023 at 11:19 AM Ralph Corderoy <ralph@inputplus.co.uk> wrote:
> > > There was no job control on plan9 and I can't say I ever missed it.
> >
> > Just yesterday I realized running two “make -j 8” in parallel was
> > making them both go real slow so I stopped one of them with ^Z and
> > continued it once the other make finished. This use case can’t be
> > handled with more windows.
>
> If I lacked job control in that situation, I'd either SIGSTOP one of the
> two with kill(1) for a later SIGCONT or renice it to hardly get a look
> in until its peer had finished. (The TTY's susp character for job
> control is SIGTSTP.)
I think that Bakul's point was that there wasn't an analogue for
SIGTSTP in plan 9. You _could_ drop a `stop` into a process's
/proc/$pid/note, which was the Unix equivalent of sending `SIGSTOP`
(and I was a bit wrong earlier; that was handled by the proc driver,
not the process). I don't believe there was a binding for that in the
window system, though.
It's funny, as I thought about it a bit more, the existence of the
window system model sort of proves that there's nothing intrinsic
about how it worked that would preclude one writing, basically, a
terminal-driver in user space. No one was ever motivated to do so, but
I don't see why one couldn't have implemented job control as a
userspace primitive building on top of the existing proc/note
machinery.
- Dan C.
I have my own theory on why Plan 9 never caught on. It was never productized.
It was a modern system arriving right as the postmodern age emerged, so it
didn't resonate with the folks who were doing things. It was too early to
be recognized as a cloud operating system, which is what it really is.
I say "is" because we actually use it and have done so since 1995. We still
ship appliances based on it. We do all our development on it. I have dumps
on our file server (Ken's) going back to Jun 22, 2004.
I can't really know for sure, but I think those are the three reasons:
never made a product, modern not post modern, too early for the cloud.
Brantley Coile
> On Jan 30, 2023, at 2:03 PM, Dan Cross <crossd@gmail.com> wrote:
>
> On Mon, Jan 30, 2023 at 11:18 AM Larry McVoy <lm@mcvoy.com> wrote:
>> On Mon, Jan 30, 2023 at 11:09:03AM -0500, Dan Cross wrote:
>>> On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
>>>> On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
>>>>> Plan 9 was different, and a lot of people who were familiar with Unix
>>>>> didn't like that, and were not interested in trying out a different
>>>>> way if it meant that they couldn't bring their existing mental models
>>>>> and workflows into the new environment unchanged.
>>>>>
>>>>> At one point it struck me that Plan 9 didn't succeed as a widespread
>>>>> replacement for Unix/Linux because it was bad or incapable, but
>>>>> rather, because people wanted Linux, and not plan9.
>>>>
>>>> Many people make that mistake. New stuff instead of extend old stuff.
>>>
>>> Some would argue that's not a mistake. How else do we innovate if
>>> we're just incrementally polishing what's come before?
>>
>> I didn't say limit yourself to polishing, I said try and not invalidate
>> people's knowledge while innovating.
>>
>> Too many people go down the path of doing things very differently and
>> they rationalize that they have to do it that way to innovate. That's
>> fine but it means it is going to be harder to get people to try your
>> new stuff.
>>
>> The point I'm trying to make is that "different" is a higher barrier,
>> much, much higher, than "extend". People frequently ignore that and
>> that means other people ignore their work.
>>
>> It is what it is, I doubt I'll convince anyone so I'll drop it.
>
> Oh, I don't know. I think it's actually kind of important to see _why_
> people didn't want to look deeper into plan9 (for example). The system
> had a lot to offer, but you had to dig a bit to get into it; a lot of
> folks never got that far. If it was really lack of job control, then
> that's a shame.
>
> - Dan C.
Ha, when I was doing a design verification stint, I caught a piece of verilog that depended on an incorrect view of order of operations and saved the corp a new mask, so completely agree. Infix notation is just a bad idea.
> On Jan 30, 2023, at 11:27 AM, Larry McVoy <lm@mcvoy.com> wrote:
>
> On Mon, Jan 30, 2023 at 11:21:45AM -0500, Steve Nickolas wrote:
>> I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).
>
> I'm a
>
> if ((a == b) && (c == d))
>
> guy but yeah, spell it out not just so the compiler doesn't mess with
> you, for me, it is more so my eyes run over the expression and I can
> parse it with less brain cells, parens let me think less.
>
> I've always had the opinion that code should be written in a way that
> is the most easy to read.
On Mon, Jan 30, 2023 at 10:04:46AM -0700, Warner Losh wrote:
>
> Yes, but one legacy of that was Linux tried to use the System V ABI
> everywhere with extensions, and that means errno values are different in
> linux for different platforms, signals are a bit different etc.

Linux used the *Minix* ABI for 32-bit x86, so that Minix 1.x binaries
would run on Linux until recently (when a.out support was removed).
For some of the first original ports (e.g., Alpha and Sparc) there was
an attempt to maintain partial ABI compatibility with OSF/1 and SunOS,
mainly so that proprietary binaries (mostly Netscape) could run on
those Linux ports. However, for pretty much all of the newer
architectures, the system call and signal numbers used are inherited
from the x86 and x86_64 syscall and signal numbers. After Netscape
passed, the demand for running non-native binaries rapidly declined.

What you may be remembering was that Linux used to have an emulation
layer for the Intel Binary Compatibility Standard (iBCS2)[1]. However,
this was not native support; Linux used "int 0x80" for x86 system
calls with a.out (and later ELF) binaries, while iBCS2 mandated use of
"lcall 7,0" syscalls and COFF binaries. So iBCS2 support was an
optional thing that you had to explicitly configure the kernel to
support, and while it was quite complete, it was not Linux's native
userspace interface.

[1] https://www.linux.co.cr/free-unix-os/review/acrobat/940612.pdf

Story time: at one point in the early 90's, MIT purchased a site
license for a proprietary spreadsheet product for ISC Unix, which we
planned to run on Linux systems at MIT using iBCS2. When I contacted
Michael Johnson[2], the engineer at that company who was to implement
support for the site license --- which was going to rely on checking
the IP address on the system to see if it was 18.x.y.z --- I found out
that Michael built and tested the spreadsheet using Linux with iBCS2
emulation, and only *later* made sure it would actually run on ISC
Unix, because Linux was a much more pleasant development environment
than the System V alternatives. :-)

[2] Michael later was one of the founders of Red Hat.

In any case, while in practice some platforms use a unique set of code
points for system calls and signal numbers (those are mostly the older
architectures which exist mostly for historic interest), most
platforms which run Linux today actually use a consistent set of
system call and signal numbers.

That being said, we did implement many of the System V extensions,
mostly because there was demand from the proprietary software vendors
who needed to be convinced to port their applications from Solaris,
HP-UX, AIX, etc., to Linux.

Mercifully, Linux never implemented Streams (thank $DEITY Oracle
didn't make that a requirement :-) and we did have BSD job control
from the earliest days. That happened because in early 1992, I
considered job control to be a highly important feature that Linux
**had** to have, and so I sat down with the POSIX.1 specification and
implemented job control from the spec. The reason why Linux's tty
layer has a bit of a System V "flavor", at least from a system call
perspective, has nothing to do with the system call ABI, and more to
do with the fact that apparently representatives from System V
derivatives seemed to have a dominant role on the POSIX committee.

- Ted
[-- Attachment #1: Type: text/plain, Size: 4919 bytes --]

On Mon, Jan 30, 2023 at 1:39 PM Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Jan 30, 2023 at 10:04:46AM -0700, Warner Losh wrote:
> >
> > Yes, but one legacy of that was Linux tried to use the System V ABI
> > everywhere with extensions, and that means errno values are different in
> > linux for different platforms, signals are a bit different etc.
>
> Linux used the *Minix* ABI for 32-bit x86, so that Minix 1.x binaries
> would run on Linux until recently (when a.out support was removed).
> For some of the first original ports (e.g., Alpha and Sparc) there was
> an attempt to maintain partial ABI compatibility with OSF/1 and SunOS,
> mainly so that proprietary binaries (mostly Netscape) could run on
> those Linux ports. However, for pretty much all of the newer
> architectures, the system call and signal numbers used are inherited
> from the x86 and x86_64 syscall and signal numbers. After Netscape
> passed, the demand for running non-native binaries rapidly declined.
>
> What you may be remembering was that Linux used to have an emulation
> layer for the Intel Binary Compatibility Standard (iBCS2)[1]. However,
> this was not native support; Linux used "int 0x80" for x86 system
> calls with a.out (and later ELF) binaries, while iBCS2 mandated use of
> "lcall 7,0" syscalls and COFF binaries. So iBCS2 support was an
> optional thing that you had to explicitly configure the kernel to
> support, and while it was quite complete, it was not Linux's native
> userspace interface.

So I've written a libc-like library for the Linux ABI for my "kexec the
FreeBSD kernel" project, which produces a Linux binary inside the
FreeBSD build system. I target powerpc64, arm64 and aarch64 (with plans
for armv7 maybe and riscv64 for sure). They all have different system
call numbers, slightly different system call encodings, and the ERRNOs
above 34 are different, with not all the errnos defined on all five of
these architectures. There's also variation in how the different
structures one passes to system calls are laid out, such that I need to
have different arch .h files to describe them (cf. stat, dirent,
termios). There are generic versions, but even this small selection of
architectures has departures for some or all of them from the generics.

All of that is from this year, using only ELF binaries and trying like
heck to avoid the oldest, legacy system calls and using only the newer
ones that aren't crazy to use.

Of course, that's all "chump change" in this project since the evolved
kexec interface on Linux is somewhat different between architectures,
involves many special magic things you just have to know to use, and is
rather Linux centric... And also the evolved FreeBSD per-arch boot
interface, which has all those things, but with somewhat different
details and somewhat FreeBSD centric...

Warner

> [1] https://www.linux.co.cr/free-unix-os/review/acrobat/940612.pdf
>
> Story time: at one point in the early 90's, MIT purchased a site
> license for a proprietary spreadsheet product for ISC Unix, which we
> planned to run on Linux systems at MIT using iBCS2. When I contacted
> Michael Johnson[2], the engineer at that company who was to implement
> support for the site license --- which was going to rely on checking
> the IP address on the system to see if it was 18.x.y.z --- I found out
> that Michael built and tested the spreadsheet using Linux with iBCS2
> emulation, and only *later* made sure it would actually run on ISC
> Unix, because Linux was a much more pleasant development environment
> than the System V alternatives. :-)
>
> [2] Michael later was one of the founders of Red Hat.
>
> In any case, while in practice some platforms use a unique set of code
> points for system calls and signal numbers (those are mostly the older
> architectures which exist mostly for historic interest), most
> platforms which run Linux today actually use a consistent set of
> system call and signal numbers.
>
> That being said, we did implement many of the System V extensions,
> mostly because there was demand from the proprietary software vendors
> who needed to be convinced to port their applications from Solaris,
> HP-UX, AIX, etc., to Linux.
>
> Mercifully, Linux never implemented Streams (thank $DEITY Oracle
> didn't make that a requirement :-) and we did have BSD job control
> from the earliest days. That happened because in early 1992, I
> considered job control to be a highly important feature that Linux
> **had** to have, and so I sat down with the POSIX.1 specification and
> implemented job control from the spec. The reason why Linux's tty
> layer has a bit of a System V "flavor", at least from a system call
> perspective, has nothing to do with the system call ABI, and more to
> do with the fact that apparently representatives from System V
> derivatives seemed to have a dominant role on the POSIX committee.
>
> - Ted

[-- Attachment #2: Type: text/html, Size: 6043 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1007 bytes --]

On Mon, Jan 30, 2023 at 3:39 PM Theodore Ts'o <tytso@mit.edu> wrote:
> apparently representatives from System V derivatives
> seemed to have a dominant role on the POSIX committee.

I can personally attest (having been in the room during many of those
discussions) that this is a true statement. Through the early versions
(certainly thru .1, the original System Call Interface), the AT&T team
and a number of vendors, particularly the European ones, tried hard to
push the SVID to be POSIX, although many of us who had used a lot of
BSD in our systems fought back. Can not say it was a fun time. The
problem was everyone in the room wanted to be the big dog and to have
all the food -- no one could really admit that we were all in it
together and the real competition was elsewhere.

FWIW: Keith Bostic occasionally came from CSRG to some of those
meetings and was amazingly gracious/patient. It's why 4.4 had a number
of things in it that made it closer to POSIX.

[-- Attachment #2: Type: text/html, Size: 2142 bytes --]
On Mon, Jan 30, 2023 at 02:03:32PM -0500, Dan Cross wrote:
> On Mon, Jan 30, 2023 at 11:18 AM Larry McVoy <lm@mcvoy.com> wrote:
> > On Mon, Jan 30, 2023 at 11:09:03AM -0500, Dan Cross wrote:
> > > On Mon, Jan 30, 2023 at 10:45 AM Larry McVoy <lm@mcvoy.com> wrote:
> > > > On Mon, Jan 30, 2023 at 10:35:25AM -0500, Dan Cross wrote:
> > > > > Plan 9 was different, and a lot of people who were familiar with Unix
> > > > > didn't like that, and were not interested in trying out a different
> > > > > way if it meant that they couldn't bring their existing mental models
> > > > > and workflows into the new environment unchanged.
> > > > >
> > > > > At one point it struck me that Plan 9 didn't succeed as a widespread
> > > > > replacement for Unix/Linux because it was bad or incapable, but
> > > > > rather, because people wanted Linux, and not plan9.
> > > >
> > > > Many people make that mistake. New stuff instead of extend old stuff.
> > >
> > > Some would argue that's not a mistake. How else do we innovate if
> > > we're just incrementally polishing what's come before?
> >
> > I didn't say limit yourself to polishing, I said try and not invalidate
> > people's knowledge while innovating.
> >
> > Too many people go down the path of doing things very differently and
> > they rationalize that they have to do it that way to innovate. That's
> > fine but it means it is going to be harder to get people to try your
> > new stuff.
> >
> > The point I'm trying to make is that "different" is a higher barrier,
> > much, much higher, than "extend". People frequently ignore that and
> > that means other people ignore their work.
> >
> > It is what it is, I doubt I'll convince anyone so I'll drop it.
>
> Oh, I don't know. I think it's actually kind of important to see _why_
> people didn't want to look deeper into plan9 (for example). The system
> had a lot to offer, but you had to dig a bit to get into it; a lot of
> folks never got that far. If it was really lack of job control, then
> that's a shame.

It's certainly not just job control. I think it's a combo of being
unfamiliar, no source (at first I believe), and Linux was already
pretty far along.

The lesson is that if there is an installed base and you want people
to move, you have to make that easy, and there has to be a noticeable
gain. Plan 9 sounded cool to me but Linux was easy.
--
---
Larry McVoy           Retired to fishing          http://www.mcvoy.com/lm/boat
There was Plan 9 source available, but the early releases were in the
AT&T Unix mode and required some payment or academic connection. The
early demo disks might not have had source - I don't remember - but if
not, there was simply no room on a floppy. The CD releases had full
source.

Plan 9 was a research system. It was hoped that maybe one day it would
become a commercial success, but that was never the prime motivation.
It only "failed" as a product, and there are many contributing factors
there, including existing systems that were good enough, a desire for
people to have "workstations" and ignore the benefits of a completing
window UI on a mainframe (Cray was an exception, earlier), and AT&T
lawyers refusing to think realistically about open source (about as
polite a way as I can express a multiyear fight that never ended, only
fizzled into stalemate).

As a research system, Plan 9 was a huge success. We're still talking
about its ideas 30+ years on.

-rob

On Tue, Jan 31, 2023 at 8:24 AM Larry McVoy <lm@mcvoy.com> wrote:
[...]
> It's certainly not just job control. I think it's a combo of being
> unfamiliar, no source (at first I believe), and Linux was already
> pretty far along.
>
> The lesson is that if there is an installed base, and you want people
> to move, you have to make that easy and there has to be a noticeable
> gain. Plan 9 sounded cool to me but Linux was easy.
It's a long list of ideas that came from Plan 9 into Unix.

One of those wonderful Plan 9 ideas, the cpu command, with all it
implies, has new life: github.com/u-root/cpu. Several companies are
using the cpu command as the basis of lightweight VM and edge
environments, for example.

Also, rfork went into FreeBSD ca. 1996, and it's still there.

Al Viro, IIRC, put some stuff into his VFS ca. 1998 that specifically
came from Plan 9, including private mounts. We used them at LANL to
build private name spaces for cluster nodes, over 9P, as part of
Clustermatic.

Some core ideas of controlling subsystems via writable synthetics also
made it in. And on and on.

On Mon, Jan 30, 2023 at 2:17 PM Rob Pike <robpike@gmail.com> wrote:
> There was Plan 9 source available, but the early releases were in the
> AT&T Unix mode and required some payment or academic connection. The
> early demo disks might not have had source - I don't remember - but if
> not, there was simply no room on a floppy. The CD releases had full
> source.
>
> Plan 9 was a research system. It was hoped that maybe one day it would
> become a commercial success, but that was never the prime motivation.
> It only "failed" as a product, and there are many contributing factors
> there, including existing systems that were good enough, a desire for
> people to have "workstations" and ignore the benefits of a completing
> window UI on a mainframe (Cray was an exception, earlier), and AT&T
> lawyers refusing to think realistically about open source (about as
> polite a way I can express a multiyear fight that never ended, only
> fizzled into stalemate).
>
> As a research system, Plan 9 was a huge success. We're still talking
> about its ideas 30+ years on.
>
> -rob

[...]
This wish is perhaps shared by no one else but I'd still love to have a system where the kernel has the clean
architectural lines of plan9 + more good stuff from it, and a Unixy API for many existing programs, perhaps as a shared lib. And I don't want all the heft of BSD or Linux kernels!
Now this may be quite impractical (like trying to make C as safe as Rust) but that is what I want! I just think there is a lot more here that can be explored further.
> On Jan 30, 2023, at 2:15 PM, Rob Pike <robpike@gmail.com> wrote:
>
> There was Plan 9 source available, but the early releases were in the AT&T Unix mode and required some payment or academic connection. The early demo disks might not have had source - I don't remember - but if not, there was simply no room on a floppy. The CD releases had full source.
>
> Plan 9 was a research system. It was hoped that maybe one day it would become a commercial success, but that was never the prime motivation. It only "failed" as a product, and there are many contributing factors there, including existing systems that were good enough, a desire for people to have "workstations" and ignore the benefits of a completing window UI on a mainframe (Cray was an exception, earlier), and AT&T lawyers refusing to think realistically about open source (about as polite a way I can express a multiyear fight that never ended, only fizzled into stalemate).
>
> As a research system, Plan 9 was a huge success. We're still talking about its ideas 30+ years on.
>
> -rob
On 1/30/23, Bakul Shah <bakul@iitbombay.org> wrote:
> This wish is perhaps shared by no one else but I'd still love to have a
> system where the kernel has the clean
> architectural lines of plan9 + more good stuff from it, and a Unixy API for
> many existing programs, perhaps as a shared lib. And I don't want all the
> heft of BSD or Linux kernels!
>
> Now this may be quite impractical (like trying to make C as safe as Rust)
> but that is what I want! I just think there is a lot more here that can be
> explored further.
>
I'm writing something sort of like that. Specifically it will be a
QNX-like OS in which literally everything is a file (even all process
state and memory), with pretty much the only real primitives being
variants of read()/write(), and everything else will be built on top
of those (even other file APIs like open()/close()/stat()). The
user-level API will be mostly compatible with conventional Unix, but
will be split into multiple libraries rather than throwing everything
in libc (there will be a base system library implementing the core
file API, and libc and a few other libraries implementing a relatively
standard Unix API will sit on top of this).
On Mon, 30 Jan 2023, Steve Nickolas wrote:

[...]

> I've actually been bitten by compilers doing unexpectedly stupid things,
> so I guard around them. I'll never merge two chars into a short with
>
> c=(a<<8)|b;

I used to do just that to disambiguate commands that started with the
same letter in a UT-200 driver that I (re)wrote. Of course, I was young
and stupid back then...

It's quite something to see things like:

    switch (cmd) {
    case 'AA':
        ...
        break;
    case 'AB':
        ...
        break;
    default:
        wtf();    /* no return */
    }

Horribly machine-dependent, but I knew the 11/40 and Ed 6 quite well.

[...]

> I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).

Indeed; I *always* use parentheses even if they are not necessary (for
my benefit, at least).

-- Dave