Computer Old Farts Forum
* [COFF] Other OSes?
@ 2018-07-05  5:56 wkt
  2018-07-05  6:29 ` spedraja
                   ` (4 more replies)
  0 siblings, 5 replies; 61+ messages in thread
From: wkt @ 2018-07-05  5:56 UTC (permalink / raw)


OK, I guess I'll be the one to start things going on the COFF list.

What other features, ideas etc. were available in other operating
systems which Unix should have picked up but didn't?

[ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]

Cheers, Warren


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-05  5:56 [COFF] Other OSes? wkt
@ 2018-07-05  6:29 ` spedraja
  2018-07-05  6:40 ` bakul
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: spedraja @ 2018-07-05  6:29 UTC (permalink / raw)



2018-07-05 7:56 GMT+02:00 Warren Toomey <wkt at tuhs.org>:

> OK, I guess I'll be the one to start things going on the COFF list.
>
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?
>
>
I've thought about this many times in the past, and I've always run into
the same problem: the frame of reference.

My choice has always been to use the timeline as the first variable; and
second, the characteristics, power and size of the hardware on which UNIX
would run.

But I think that a third variable is needed: other hardware systems with
other operating systems running.

And as a final step, a detailed list of the aspects and features of these
OSes and UNIX, selecting the features common to all... or the uncommon ones.

Cordiales saludos / Kind Regards.

Gracias | Regards - Saludos | Greetings | Freundliche Grüße | Salutations
-- 
*Sergio Pedraja*
-----
Don't believe everything you see, nor believe that you're seeing everything
-----
"The state of a backup is unknown
until you try to restore it" (- nixCraft)


​



* [COFF] Other OSes?
  2018-07-05  5:56 [COFF] Other OSes? wkt
  2018-07-05  6:29 ` spedraja
@ 2018-07-05  6:40 ` bakul
  2018-07-05 15:23   ` clemc
                     ` (2 more replies)
  2018-07-06  0:55 ` [COFF] " crossd
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 61+ messages in thread
From: bakul @ 2018-07-05  6:40 UTC (permalink / raw)


On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> 
> OK, I guess I'll be the one to start things going on the COFF list.
> 
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?
> 
> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]

- Capabilities (a number of OSes implemented them -- see Hank Levy's book:
  https://homes.cs.washington.edu/~levy/capabook/)
- Namespaces (plan9)







* [COFF] Other OSes?
  2018-07-05  6:40 ` bakul
@ 2018-07-05 15:23   ` clemc
  2018-07-05 20:49     ` scj
  2018-07-05 22:51   ` ewayte
  2018-07-08 20:31   ` perry
  2 siblings, 1 reply; 61+ messages in thread
From: clemc @ 2018-07-05 15:23 UTC (permalink / raw)



On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah <bakul at bitblocks.com> wrote:

> On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> >
> > OK, I guess I'll be the one to start things going on the COFF list.
> >
> > What other features, ideas etc. were available in other operating
> > systems which Unix should have picked up but didn't?
> >
> > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]
>
> - Capabilities (a number of OSes implemented them -- See Hank Levy's book:
>   https://homes.cs.washington.edu/~levy/capabook/
> - Namespaces (plan9)

+1 for both



* [COFF] Other OSes?
  2018-07-05 15:23   ` clemc
@ 2018-07-05 20:49     ` scj
  2018-07-05 21:25       ` david
                         ` (4 more replies)
  0 siblings, 5 replies; 61+ messages in thread
From: scj @ 2018-07-05 20:49 UTC (permalink / raw)



That's an interesting topic, but it also gets my mind thinking about
UNIX features that were wonderful but didn't evolve as computers did.

My two examples of this are editor scripts and shell scripts.   In
the day, I would write at least one shell script and several editor
scripts a day.  Most of them were 2-4 lines long and used once.  But
they allowed operations to be done on multiple files quite quickly and
safely.

With the advent of glass teletypes, editor scripts simply evaporated --
there was no equivalent.  (Yes, there were programs like sed, but it
wasn't the same...)  Changing, e.g., a function name in 10 files
got a lot more tedious.
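A typical throwaway editor script of that sort -- sketched here with
hypothetical files and function names -- just drove ed(1) from a shell loop:

```shell
# Hypothetical demo files standing in for real source (names invented).
mkdir -p /tmp/ed-demo && cd /tmp/ed-demo
printf 'int old_name(void) { return 0; }\n' > a.c
printf 'int main(void) { return old_name(); }\n' > b.c

# The one-off editor script: rename old_name to new_name in every .c
# file by driving ed(1) from the shell.
for f in *.c
do
    ed -s "$f" <<'EOF'
g/old_name/s//new_name/g
w
q
EOF
done

grep -l new_name *.c    # both files now carry the new name
```

With -s, ed stays silent; the empty pattern in s// reuses the g-command's
regex, so the whole rename is three ed commands per file.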

With the advent of drag and drop and visual interfaces, shell scripts
evaporated as well.   Once again, doing something on 10 files got
harder than before.   I still use a lot of shell scripts, but mostly
don't write them from scratch any more.

What abstraction mechanisms might we add back to Unix to fill these
gaps?

Steve

----- Original Message -----
From: "Clem Cole" <clemc at ccc.com>
To: "Bakul Shah" <bakul at bitblocks.com>
Cc: <coff at tuhs.org>
Sent: Thu, 5 Jul 2018 11:23:04 -0400
Subject: Re: [COFF] Other OSes?

On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah <bakul at bitblocks.com> wrote:
> On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> >
> > OK, I guess I'll be the one to start things going on the COFF list.
> >
> > What other features, ideas etc. were available in other operating
> > systems which Unix should have picked up but didn't?
> >
> > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]
>
> - Capabilities (a number of OSes implemented them -- See Hank Levy's book:
>   https://homes.cs.washington.edu/~levy/capabook/
> - Namespaces (plan9)

+1 for both




* [COFF] Other OSes?
  2018-07-05 20:49     ` scj
@ 2018-07-05 21:25       ` david
  2018-07-06 15:42         ` gtaylor
  2018-07-05 22:38       ` ralph
                         ` (3 subsequent siblings)
  4 siblings, 1 reply; 61+ messages in thread
From: david @ 2018-07-05 21:25 UTC (permalink / raw)



I find that anything I have to do twice winds up in a shell script, on the assumption that I’ll do it again. So I’ve written a lot of scripts lately, mostly to automate some tools to run our software validation system. These aren’t long (10-15 lines) and I freely give them to others knowing that they will either ignore it (mostly) or use it as is without ever looking at the internals.

I think that editor scripts have died out. ed is almost designed for scripting, and with vi, emacs, and other GUI-based editors the concept of scripting the editor has died away. Sad; I remember writing some really hairy ed scripts.

A quick read of Hank Levy’s book (thanks for the link) brings back (somewhat bad) memories of working on the iAPX 432. I found the chip interesting in concept, and poorly suited for what the company wanted done. Trying to fold the software to fit the hardware didn’t work out. The project was abandoned.

	David

> On Jul 5, 2018, at 1:49 PM, Steve Johnson <scj at yaccman.com> wrote:
> 
> That's an interesting topic, but it also gets my mind thinking about UNIX features that were wonderful but didn't evolve as computers did.
> 
> My two examples of this are editor scripts and shell scripts.   In the day, I would write at least one shell script and several editor scripts a day.  Most of them were 2-4 lines long and used once.  But they allowed operations to be done on multiple files quite quickly and safely.
> 
> With the advent of glass teletypes, editor scripts simply evaporated -- there was no equivalent.  (Yes, there were programs like sed, but it wasn't the same...)  Changing, e.g., a function name in 10 files got a lot more tedious.
> 
> With the advent of drag and drop and visual interfaces, shell scripts evaporated as well.   Once again, doing something on 10 files got harder than before.   I still use a lot of shell scripts, but mostly don't write them from scratch any more.
> 
> What abstraction mechanisms might we add back to Unix to fill these gaps?
> 
> Steve
> 
> 
> 
> ----- Original Message -----
> From: "Clem Cole" <clemc at ccc.com>
> To: "Bakul Shah" <bakul at bitblocks.com>
> Cc: <coff at tuhs.org>
> Sent: Thu, 5 Jul 2018 11:23:04 -0400
> Subject: Re: [COFF] Other OSes?
>
> On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah <bakul at bitblocks.com> wrote:
> On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> >
> > OK, I guess I'll be the one to start things going on the COFF list.
> >
> > What other features, ideas etc. were available in other operating
> > systems which Unix should have picked up but didn't?
> >
> > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]
>
> - Capabilities (a number of OSes implemented them -- See Hank Levy's book:
>   https://homes.cs.washington.edu/~levy/capabook/
> - Namespaces (plan9)
>
> +1 for both
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff




* [COFF] Other OSes?
  2018-07-05 20:49     ` scj
  2018-07-05 21:25       ` david
@ 2018-07-05 22:38       ` ralph
  2018-07-05 23:11       ` bakul
                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: ralph @ 2018-07-05 22:38 UTC (permalink / raw)



Hi Steve,

> With the advent of glass teletypes, [editor] scripts simply evaporated
> -- there was no equivalent.  (yes, there were programs like sed, but
> it wasn't the same...).  Changing, e.g., a function name oin 10 files
> got a lot more tedious.

I still write the odd editor script.  I interact with ed every day; it's
handy when the information for the edit is on the TTY from previous
commands and you just want to get in, edit, and w, q.

A shell script and ed script I use a lot is ~/bin/rcsanno, a `blame' for
RCS files that I have dotted about.  RCS because SCCS wasn't available
back before CSSC came along.

It runs through each revision on the path I'm interested in, e.g. 1.1 to
the latest 1.42.  It starts with 1.1, but with `1,1:' prepended to each
line.  For 1.2 onwards it does an rcsdiff(1) with `-e': produce an ed
script.  This is piped into a brief bit of awk that munges the ed script
to prepend the revision to any added or changed line, and then it's fed
into ed.  Thus the `1.2' that's created has all lines match /^1\.[12]:/.
And so on up to 1.42.  It's not the quickest way compared to
interpreting the RCS `,v' file, but it required no insight into that
file format and is `good enough'.
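As a rough sketch of that pipeline (not the actual rcsanno: plain diff -e
on two hypothetical revision files stands in for rcsdiff, and the awk is a
guess at the shape of the munging described):

```shell
# Two hypothetical revisions of a file.
d=/tmp/rcsanno-demo; mkdir -p "$d"; cd "$d"
printf 'alpha\nbeta\n'  > r1    # revision 1.1
printf 'alpha\ngamma\n' > r2    # revision 1.2

# Start the annotation with every 1.1 line tagged.
sed 's/^/1.1: /' r1 > anno

# diff -e emits an ed script turning r1 into r2; munge it so every added
# or changed line arrives pre-tagged with its revision, then feed it
# (plus w/q) into ed against the annotated file.
{
  diff -e r1 r2 | awk '
    /^[0-9][0-9,]*[ac]$/ { intext = 1; print; next }  # a/c command: text follows
    intext && $0 == "."  { intext = 0; print; next }  # end of text block
    intext               { print "1.2: " $0; next }   # tag the new lines
                         { print }                    # d commands pass through
  '
  printf 'w\nq\n'
} | ed -s anno

# prints:
#   1.1: alpha
#   1.2: gamma
cat anno
```

Repeating the diff/munge/ed step for 1.2 through 1.42 gives the `blame'
output, without ever parsing the `,v' file format.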

I suspect a more interesting question is what did Unix adopt from other
OSes that it would have been better without!  :-)

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy



* [COFF] Other OSes?
  2018-07-05  6:40 ` bakul
  2018-07-05 15:23   ` clemc
@ 2018-07-05 22:51   ` ewayte
  2018-07-08 20:31   ` perry
  2 siblings, 0 replies; 61+ messages in thread
From: ewayte @ 2018-07-05 22:51 UTC (permalink / raw)


That book brought back memories - I wrote a paper for my graduate computer
architecture class on the Intel 432 around 1988.

On Thu, Jul 5, 2018 at 2:46 AM Bakul Shah <bakul at bitblocks.com> wrote:

> On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> >
> > OK, I guess I'll be the one to start things going on the COFF list.
> >
> > What other features, ideas etc. were available in other operating
> > systems which Unix should have picked up but didn't?
> >
> > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]
>
> - Capabilities (a number of OSes implemented them -- See Hank Levy's book:
>   https://homes.cs.washington.edu/~levy/capabook/
> - Namespaces (plan9)
>
>
>
>
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>


-- 
Eric Wayte



* [COFF] Other OSes?
  2018-07-05 20:49     ` scj
  2018-07-05 21:25       ` david
  2018-07-05 22:38       ` ralph
@ 2018-07-05 23:11       ` bakul
  2018-07-06  0:06         ` lm
  2018-07-06  0:52       ` tytso
  2018-07-06 15:38       ` gtaylor
  4 siblings, 1 reply; 61+ messages in thread
From: bakul @ 2018-07-05 23:11 UTC (permalink / raw)


On Thu, 05 Jul 2018 13:49:58 -0700 "Steve Johnson" <scj at yaccman.com> wrote:
> That's an interesting topic, but it also gets my mind thinking about UNIX
> features that were wonderful but didn't evolve as computers did.
> 
> My two examples of this are editor scripts and shell scripts. In the day, I
> would write at least one shell script and several editor scripts a day.  Most
> of them were 2-4 lines long and used once.  But they allowed operations to be
> done on multiple files quite quickly and safely.
> 
> With the advent of glass teletypes, editor scripts simply evaporated -- there
> was no equivalent.  (Yes, there were programs like sed, but it wasn't the
> same...)  Changing, e.g., a function name in 10 files got a lot more tedious.
> 
> With the advent of drag and drop and visual interfaces, shell scripts
> evaporated as well.  Once again, doing something on 10 files got harder than
> before.  I still use a lot of shell scripts, but mostly don't write them from
> scratch any more.

With specialized apps there is less need for the kind of
things we used to do.  While some of us want lego technic,
most people simply want preconstructed toys to play with.

> What abstraction mechanisms might we add back to Unix to fill these gaps?

One way to interpret your question is can we build
*composable* GUI elements? In specialized application there
are GUI based systems that work. e.g. schematic capture,
layout, control system design, sound etc. (Though for circuit
design HDL is probably far more popular now. Schematic entry
is only for board level design.)

Even if you designed GUI elements for filters etc., where you
can drag and drop files on an input port and attach the output to
some other input port, the challenge is in making them easy
to use.  Normally this would be unwieldy -- unless
you used virtual "NFC", where bringing two processing blocks
near each other automatically connects their input/output
ports.  Or you could open a "processing" window where you can type
in your script, but it isn't run until its output is connected
somewhere.  Or a "selection" window where copies of files/dirs
can be dragged and dropped.

This may actually work well for distributed systems, which are
basically dataflow machines.  [Note that control systems are
essentially dataflow systems.]  This would actually complement
something I have been thinking of for constructing/managing
distributed systems.

Note that there are languages like Scratch & Blockly for kids'
programming, but to my knowledge nothing at the macro
level, where the building blocks are files/dirs/machines, TCP
connections, and programs.



* [COFF] Other OSes?
  2018-07-05 23:11       ` bakul
@ 2018-07-06  0:06         ` lm
  2018-07-06 15:49           ` gtaylor
  0 siblings, 1 reply; 61+ messages in thread
From: lm @ 2018-07-06  0:06 UTC (permalink / raw)


On Thu, Jul 05, 2018 at 04:11:57PM -0700, Bakul Shah wrote:
> On Thu, 05 Jul 2018 13:49:58 -0700 "Steve Johnson" <scj at yaccman.com> wrote:
> > That's an interesting topic, but it also gets my mind thinking about UNIX
> > features that were wonderful but didn't evolve as computers did.
> > 
> > My two examples of this are editor scripts and shell scripts. In the day, I
> > would write at least one shell script and several editor scripts a day.  Most
> > of them were 2-4 lines long and used once.  But they allowed operations to be
> > done on multiple files quite quickly and safely.
> > 
> > With the advent of glass teletypes, editor scripts simply evaporated -- there
> > was no equivalent.  (Yes, there were programs like sed, but it wasn't the
> > same...)  Changing, e.g., a function name in 10 files got a lot more tedious.
> > 
> > With the advent of drag and drop and visual interfaces, shell scripts
> > evaporated as well.  Once again, doing something on 10 files got harder than
> > before.  I still use a lot of shell scripts, but mostly don't write them from
> > scratch any more.
> 
> With specialized apps there is less need for the kind of
> things we used to do.  While some of us want lego technic,
> most people simply want preconstructed toys to play with.

Years and years ago, decades ago, I worked on a time series picker that
had a pretty cool interface.  Yeah, it was a GUI tool with all the menus,
etc, but it also had a console prompt because all the menus had keyboard
shortcuts.  What was neat about it was that as you pulled down menus and
did stuff, which was a process where you'd go through several things to
get what you want, the console would fill in with the shortcuts.

So if you hadn't used it for a while, using it basically taught you the
shortcuts.  It was pretty slick; I wish all GUIs worked like that.



* [COFF] Other OSes?
  2018-07-05 20:49     ` scj
                         ` (2 preceding siblings ...)
  2018-07-05 23:11       ` bakul
@ 2018-07-06  0:52       ` tytso
  2018-07-06  5:59         ` ralph
  2018-07-06 15:57         ` gtaylor
  2018-07-06 15:38       ` gtaylor
  4 siblings, 2 replies; 61+ messages in thread
From: tytso @ 2018-07-06  0:52 UTC (permalink / raw)



On Thu, Jul 05, 2018 at 01:49:58PM -0700, Steve Johnson wrote:
> My two examples of this are editor scripts and shell scripts.   In
> the day, I would write at least one shell script and several editor
> scripts a day.  Most of them were 2-4 lines long and used once.  But
> they allowed operations to be done on multiple files quite quickly and
> safely.

Before making such generalizations, I think it's wise to state more
clearly what you have in mind when you say "shell script".  I use
shell scripts all the time, but they tend to be for simple things that
have control statements, and thus are not 2-4 lines long, and they
aren't use-just-once sort of things.

For example, my "do-suspend script":

#!/bin/bash

if [[ $EUID -ne 0 ]]; then
   exec sudo /usr/local/bin/do-suspend
fi

/usr/bin/xflock4
sleep 0.5
echo deep > /sys/power/mem_sleep 
exec systemctl suspend

I use this many times a day, and shell script was the right choice, as
it was simple to write, simple to debug, and it would have been more
effort to implement it in almost any other language.  (Note that I
needed to use /bin/bash instead of a POSIX /bin/sh because I needed
access to the effective uid via $EUID.)

The "shell scripts" and "editor scripts" you say have disappeared
seem to be one-off things.  For me, those have been replaced
by command-line history and command-line editing in the shell, and as
far as one-off editor scripts, I use emacs's keyboard macro facility,
which is faster (for me) to set up than editor scripts.

> With the advent of glass teletypes, editor scripts simply evaporated --
> there was no equivalent.  (yes, there were programs like sed, but it
> wasn't the same...).  Changing, e.g., a function name in 10 files
> got a lot more tedious.

So here's the thing, with emacs's keyboard macros, it's actually not
tedious at all.  That's why I use them instead of editor scripts!

Granted, somewhere starting around 10 files I'll probably end up
breaking out some emacs-lisp for further automation, but for a small
number of files, using file-name completion and then using a handful
of control characters per file to kick off the keyboard macros a few
hundred times (C-u C-u C-u C-u C-u C-x C-e) ends up being faster to
type than consing up a one-off shell or editor script.  (Because,
basically, an emacs keyboard macro really *is* an editor script!)

> What abstraction mechanisms might we add back to Unix to fill these
> gaps?

What do you mean by "Unix"?  Does using /bin/bash for my shell scripts
count as Unix?  Or only if I restrict myself to a strict Unix V9 or
POSIX subset?  Does using emacs count as "Unix"?  What about perl?  Or
Python?

Personally I consider all of this "Unix", or at least "Linux", and they
are additional tools which, if learned well, are far more powerful
than the traditional Unix tools.  And the nice thing about Linux
largely dominating the "Unix-like" world is that, for the most part, I
don't have to worry about backwards compatibility issues, at least for
my personal usage.

For scripts that I expect other people to use, I still have the reflex
of doing things as portably as possible.  So, for example, consider
if you will mk_cmds and its helper scripts, part of a port
of Multics's SubSystem facility that originated with Kerberos V5, and is
now also part of the ext2/3/4 userspace utilities:

https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/mk_cmds.sh.in
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/ct_c.sed
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/ct_c.awk

My implementation of mk_cmds was designed for ultra-portability (as
in, it would behave identically on Apple's A/UX, IBM's AIX, Solaris (with
either a SunSoft or GNU toolchain), OSF/1, Linux, etc.).  To do this,
the above scripts use a strict subset of POSIX defined syntax and
behaviours.  And it's a great use of shell, sed, and awk scripts,
precisely because they were available everywhere.

The original version of mk_cmds was implemented in C, Yacc, and Lex,
and the problem was that (a) this got complicated when cross
compiling, where the Host != Target architecture, and (b) believe it
or not, Yacc and Lex are not all that standardized across different
Unix/Linux systems.  (Between AT&T's Yacc, Berkeley Yacc, and GNU
Bison, portability was a massive and painful headache.)

So I solved both problems by taking what previously had been done in
C, Yacc, and Lex, and using shell, sed, and awk instead.  :-)

   	     	      	    	   	    - Ted



* [COFF] Other OSes?
  2018-07-05  5:56 [COFF] Other OSes? wkt
  2018-07-05  6:29 ` spedraja
  2018-07-05  6:40 ` bakul
@ 2018-07-06  0:55 ` crossd
  2018-07-06  5:42   ` bakul
  2018-07-06  4:04 ` grog
  2018-07-08 20:50 ` [COFF] Other OSes? perry
  4 siblings, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-06  0:55 UTC (permalink / raw)


On Thu, Jul 5, 2018 at 1:56 AM Warren Toomey <wkt at tuhs.org> wrote:

> OK, I guess I'll be the one to start things going on the COFF list.
>
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?
>
> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ]
>

Ooo, neato. I think about this a lot, because my day job is writing OS
kernels.

I think it's kind of paradoxical, but Unix went off the rails in not
embracing the filesystem enough, and simultaneously embracing it too much.

Separate namespaces for things like sockets add unnecessary complexity
since you need separate system calls to create these objects because you
can't do it through the filesystem. "Everything is a file" ended up being
too weak for a lot of interesting applications. Plan 9 fixed that by saying
instead, "everything is a filesystem and we have per-process (group)
composable namespaces." So now your window system is inherently
network-capable and recursive with no special effort.

That was great, but it turns out that the simplistic nature of Unix (and
plan9) files doesn't fit some applications particularly well: doing
scatter/gather is sort of a pain on plan9; fixed on Unix, but with an ugly
interface. Modeling objects that are block oriented or transactional on
Unix is still something of a hack, and similarly on plan9. Important
classes of applications, like relational databases, don't map particularly
well to the filesystem (yes, yes, RDBMS servers usually store their data in
files, but they end up imposing a substantial layer on top of the
filesystem for transactions, indices, etc. In the end, they often end up
treating large files like block IO devices, or going straight to the
storage layer and working against a raw device).

Further, text as a "universal" medium means that every processing stage in
a pipeline has to do substantial parsing and validation of input, and
semantic information passed between stages is done using an external
representation as a side-channel. Of course, often we don't bother with
this because we "know" what the data is and we don't need to (I submit that
most pipelines are one-offs) and where pipelines are used more formally in
large programs, often the data passed is just binary.

I had a talk with some plan9 folks (BTL alumni) about this about a year ago
in the context of a project at work. I wanted to know why we didn't
architect systems like Unix/plan9 anymore: nowadays, everything's hidden
behind captive (web) interfaces that aren't particularly general, let alone
composable in ad hoc ways. Data stores are highly structured and totally
inaccessible. Why? It seemed like a step backwards. The explanation I got
was that Unix and plan9 were designed to solve particular problems during
specific times; those problems having been solved at those times, things
changed and the same problems aren't as important anymore: ergo, the same
style of solution wouldn't be appropriate in this brave new world. I'm sure
we all have our examples of how file-based problems *are* important, but
the rest of the world has moved on, it seems, and let's face it: if you're
on this list, you're likely the exception rather than the rule.

Incidentally, I think one might be able to base a capability system on
plan9's namespaces, but it seems like everyone has a different definition
of what a capability is. For example, do I have a capability granting me
the right to open TCP connections, potentially to anywhere, or do I have a
capability that allows me to open a TCP connection to a specific host/port
pair? More than one connection? How about a subset of ports or a range or
set of IP addresses? It's unclear to me how that's *actually* modeled and
it seems to me that you need to interpose some sort of local policy into
the process somehow, but that that policy exists outside of the
capabilities framework itself.

A few more specific things I think would be cool to see in a beyond-Unix OS:

1. Multics-style multi-level security within a process. Systems like CHERI
are headed in that direction and Dune and gVisor give many of the benefits,
but I've wondered if one could leverage hardware-assisted nested
virtualization to get something analogous to Multics-style rings. I imagine
it would be slow....
2. Is mmap() *really* the best we can do for mapping arbitrary resources
into an address space?
3. A more generalized message passing system would be cool. Something where
you could send a message with a payload somewhere in a synchronous way
would be nice (perhaps analogous to channels). VMS-style mailboxes would
have been neat.

        - Dan C.



* [COFF] Other OSes?
  2018-07-05  5:56 [COFF] Other OSes? wkt
                   ` (2 preceding siblings ...)
  2018-07-06  0:55 ` [COFF] " crossd
@ 2018-07-06  4:04 ` grog
  2018-07-06 16:10   ` gtaylor
  2018-07-08 20:50 ` [COFF] Other OSes? perry
  4 siblings, 1 reply; 61+ messages in thread
From: grog @ 2018-07-06  4:04 UTC (permalink / raw)


On Thursday,  5 July 2018 at 15:56:50 +1000, Warren Toomey wrote:
> OK, I guess I'll be the one to start things going on the COFF list.
>
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?

I came to Unix from Tandem's Guardian OS, which had clearly been based on
some exposure to Unix, though in many cases it was much more primitive
(no hierarchical file system, for example).  But it did have one area
where I found it significantly superior: interprocess communication.

The approach is considerably different from Unix.  The system is a
loosely coupled multiprocessor system which communicates by message
passing, and everything, including file operations, goes via the
message system.  That makes IPC really the basis of system
functionality, and at a user process level messages from other processes
are read from a special file ($RECEIVE, fd 0 when it's open).

It's really difficult to compare with Unix.  I've tried several times
over the years, and I still haven't come to any conclusion.  One
problem is that Tandem's userland tools (shell and friends) are far
inferior to Unix.  And when we tried to shoehorn Unix onto Guardian,
we ran into all sorts of conceptual issues.

I wrote an overview of Guardian that was published as chapter 8 of
Spinellis and Gousios's "Beautiful Architecture" (O'Reilly, 2009).  If
you don't have access, my final draft is at
http://www.lemis.com/grog/Books/t16arch.pdf.

Comments welcome.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  0:55 ` [COFF] " crossd
@ 2018-07-06  5:42   ` bakul
  2018-07-09  2:51     ` crossd
  0 siblings, 1 reply; 61+ messages in thread
From: bakul @ 2018-07-06  5:42 UTC (permalink / raw)


On Thu, 05 Jul 2018 20:55:38 -0400 Dan Cross <crossd at gmail.com> wrote:
> A few more specific things I think would be cool to see in a beyond-Unix OS:
>
> 1. Multics-style multi-level security within a process. Systems like CHERI
> are headed in that direction and Dune and gVisor give many of the benefits,
> but I've wondered if one could leverage hardware-assisted nested
> virtualization to get something analogous to Multics-style rings. I imagine
> it would be slow....

In traditional machines a protection domain is tightly
coupled with a virtual address space. Code in one
address space can not touch anything in another address
space (unless the same VM object is mapped in both). Except
for shared memory mapping, any other communication must be
mediated by a kernel. [x86 has call gates but they are not
used much if at all]

In the Mill arch. a protection domain is decoupled from
virtual address space. That is, code in one domain can not
directly touch anything in another domain but can call
functions in another domain, provided it has the right sort of
access rights. Memory can be mapped into multiple domains so
once mapped, access becomes cheap.  This also means everything
can be in the same virtual address space.

In traditional systems there is a mode switch when a process
makes a supervisor call but this is dangerous (access to
everything in kernel mode so people want nested domains).

In Mill a thread can traverse through multiple protection
domains -- sort of like in the Alpha Real Time Kernel where a
thread can traverse through a number of nodes[1] -- and each
node in effect is its own protection domain. This means
instead of a syscall you can make a shared library call
directly to a service running in another domain, and what this
function can access from your domain is very tightly
constrained. The need for a privileged kernel completely
disappears!

Mill ideas are very much worth exploring.  It will be possible
to build highly secure systems with it -- if it ever gets
sufficiently funded and built!  IMHO layers of mapping as with
virtualization/containerization are not really needed for
better security or isolation.

> 2. Is mmap() *really* the best we can do for mapping arbitrary resources
> into an address space?

I think this is fine.  Even mmapping remote objects should
work!

> 3. A more generalized message passing system would be cool. Something where
> you could send a message with a payload somewhere in a synchronous way
> would be nice (perhaps analogous to channels). VMS-style mailboxes would
> have been neat.

Erlang. Carl Hewitt's Actor model has this.

[1] http://tierra.aslab.upm.es/~sanz/cursos/DRTS/AlphaRtDistributedKernel.pdf


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  0:52       ` tytso
@ 2018-07-06  5:59         ` ralph
  2018-07-06 15:59           ` gtaylor
  2018-07-06 15:57         ` gtaylor
  1 sibling, 1 reply; 61+ messages in thread
From: ralph @ 2018-07-06  5:59 UTC (permalink / raw)


Hi Ted,

>     if [[ $EUID -ne 0 ]]; then
...
> (Note that I needed to use /bin/bash instead of a POSIX /bin/sh
> because I needed access to the effective uid via $EUID.)

id(1) `-u' option?

    $ ls -l id
    -rwsr-xr-x 1 31415 31415 38920 Jul  6 06:51 id
    $ ./id
    uid=1000(ralph) gid=1000(ralph) euid=31415 groups=1000(ralph),10(wheel),14(uucp),100(users)
    $ ./id -u
    31415
    $ ./id -ru
    1000
    $ 
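In that spirit, a root check needs neither bash nor $EUID; a minimal
POSIX sh sketch (the printed messages are my own illustration):

```shell
#!/bin/sh
# id -u prints the effective uid, so no bash-only $EUID is needed;
# this runs under dash, ash, ksh, and any other POSIX sh.
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "not root (euid $(id -u))" >&2
fi
```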

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-05 20:49     ` scj
                         ` (3 preceding siblings ...)
  2018-07-06  0:52       ` tytso
@ 2018-07-06 15:38       ` gtaylor
  2018-07-09  1:56         ` tytso
  4 siblings, 1 reply; 61+ messages in thread
From: gtaylor @ 2018-07-06 15:38 UTC (permalink / raw)



On 07/05/2018 02:49 PM, Steve Johnson wrote:
> My two examples of this are editor scripts and shell scripts.

Will you please elaborate on what you mean by "editor scripts"?  That's 
a term that I'm not familiar with.

I feel like you're talking about automating an editor in one way or 
another.  Redirecting standard input into ed (ex, etc) comes to mind.

As does vi(m)'s ex command mode.  Regular expressions, macros, and the 
like start waving their arms like an anxious student in the back of 
the class.

There's also Vim's vimscript language that is black magic to me.

> In the day, I would write at least one shell script and several editor 
> scripts a day.  Most of them were 2-4 lines long and used once.  But they 
> allowed operations to be done on multiple files quite quickly and safely.

I too have always written lots of shell scripts.  Granted, most of the 
one off shell scripts are long / nested command lines and not actually 
script files.  At least not until I need to do something for the 2nd or 
3rd time.

I use the following single line script daily if not hourly.

/usr/bin/xsel -ob \
  | /bin/sed 's/^\s*>\{1,\}\s\{1,\}//;s/^\s*//' \
  | /usr/bin/fmt \
  | /bin/sed ':a;N;$!ba;s/\n/ \n/g;s/ \n \n/\n\n/g;s/\n \n/\n\n/g;s/ $//' \
  | /usr/bin/xsel -ib

It is derived from a very close variant that works without the leading 
/^>+\s/

> With the advent of glass teletypes, shell scripts simply evaporated -- 
> there was no equivalent.  (yes, there were programs like sed, but it 
> wasn't the same...).  Changing, e.g., a function name in 10 files got a 
> lot more tedious.

I don't understand that at all.  How did glass ttys (screens) change 
what people do / did on unix?

Granted, there is more output history with a print terminal.  There are 
times that I'd like more of that, particularly when my terminal's 
history buffer isn't deep enough for one reason or another.  (I really 
should raise that higher than 10k lines.)

What could / would you do at a shell prompt pre-glass-TTYs that you 
can't do the same now with glass-TTYs?

I must need more caffeine as I'm not understanding the difference.

> With the advent of drag and drop and visual interfaces, shell scripts 
> evaporated as well.

Why do you say that?  If anything, GUIs caused me to use more shell 
scripts.  Rather, things that I used to do as a quick one off are now 
saved in a shell script that I'll call with the path to the file(s) that 
I want to act on.

On Windows I'd write batch files that worked by accepting file(s) being 
dragged and dropped onto them.  That way I could select files, drag them to 
a short cut and have the script run on the files that I had visually 
selected.  (I've not felt the need to do similar in unix.)

> Once again, doing something on 10 files got harder than before.

Why did it get harder?  Could you no longer still use the older shell 
based method?

Or are you saying that there was no good GUI counterpart for what you 
used to do in shell?

> I still use a lot of shell scripts, but mostly don't write them from 
> scratch any more.

I too use a lot of shell scripts.  Many of them evolve from ad-hoc 
command lines that have grown more complex or have been needed for the 
3rd time.

> What abstraction mechanisms might we add back to Unix to fill these gaps?

I don't know what I'd add back as I feel like they are still there.

I also think that the CLI is EXTREMELY dynamic and EXTREMELY flexible. 
I have zero idea how to provide a point and click GUI interface that is 
as dynamic or flexible.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-05 21:25       ` david
@ 2018-07-06 15:42         ` gtaylor
  0 siblings, 0 replies; 61+ messages in thread
From: gtaylor @ 2018-07-06 15:42 UTC (permalink / raw)


On 07/05/2018 03:25 PM, David wrote:
> ed is almost designed for scripting, and with vi, emacs, and other GUI 
> based editors the concept of scripting the editor has died away.

I don't think I would have concluded that ed was designed for scripting 
per se.  I do think it's very conducive to it.  I also think that ex 
(vi(m)'s command mode) is similar.

Vim also has an extensive vimscript language that I see a LOT of very 
fancy things in.

I also wonder if Visual Basic Scripts (.vbs) (macros) inside of the 
various Microsoft Office GUI apps might qualify here.

I think that scripts are less often used in editors than they once were.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  0:06         ` lm
@ 2018-07-06 15:49           ` gtaylor
  0 siblings, 0 replies; 61+ messages in thread
From: gtaylor @ 2018-07-06 15:49 UTC (permalink / raw)


On 07/05/2018 06:06 PM, Larry McVoy wrote:
> So if you hadn't used it for a while, using it basically taught you 
> the shortcuts.  It was pretty slick, I wish all guis worked like that.

"smit(ty)" from IBM's AIX comes to mind.  It provides a nice curses 
based menu interface / form to fill in information /and/ shows the 
underlying OS command(s) that will be executed.

I have always thought this was fairly unique (at least I'm ignorant of 
anything else like it) and quite helpful.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  0:52       ` tytso
  2018-07-06  5:59         ` ralph
@ 2018-07-06 15:57         ` gtaylor
  1 sibling, 0 replies; 61+ messages in thread
From: gtaylor @ 2018-07-06 15:57 UTC (permalink / raw)


On 07/05/2018 06:52 PM, Theodore Y. Ts'o wrote:
> Granted, somewhere starting around 10 files I'll probably end up breaking 
> out some emacs-lisp for further automation, but for a small number of 
> files, using file-name completion and then using a handful of control 
> characters per file to kick off the keyboard macros a few hundred times 
> (C-u C-u C-u C-u C-u C-x C-e) ends up being faster to type than consing 
> up a one-off shell or editor script.  (Because, basically, an emacs 
> keyboard macro really *is* an editor script!)

I've not needed to do something like this very often.  The few times 
that I've needed to do it have usually involved (shell) scripts, likely 
calling sed and / or awk, possibly with their associated files.  (Much 
like it looks like was done for mk_cmds.)

I would also consider using Vim's bufdo command across multiple buffers 
("open files").  Depending on the complexity, I'd likely define a macro 
(if not an all out Vimscript), and execute said macro / Vimscript via 
:bufdo.

I had assumed that emacs had similar functionality.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  5:59         ` ralph
@ 2018-07-06 15:59           ` gtaylor
  2018-07-06 16:10             ` ralph
  0 siblings, 1 reply; 61+ messages in thread
From: gtaylor @ 2018-07-06 15:59 UTC (permalink / raw)


On 07/05/2018 11:59 PM, Ralph Corderoy wrote:
> id(1) `-u' option?

Ah, "(constructive) criticism".

The joys of sharing what one does with a community and getting all the 
alternative options / "improvements" / suggestion responses.

:-)

I think this is a wonderful way to learn.  :-D



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06 15:59           ` gtaylor
@ 2018-07-06 16:10             ` ralph
  2018-07-06 16:47               ` gtaylor
  0 siblings, 1 reply; 61+ messages in thread
From: ralph @ 2018-07-06 16:10 UTC (permalink / raw)


Hi Grant,

> > id(1) `-u' option?
...
> I think this is a wonderful way to learn.  :-D

Like Ted, I favour sh over bash for scripts when possible,
and was just chipping in to help increase sh's number.  :-)

    $ strace -c dash -c : |& sed -n '1p;$p'
    % time     seconds  usecs/call     calls    errors syscall
    100.00    0.000883                    35         1 total
    $ strace -c bash -c : |& sed -n '1p;$p'
    % time     seconds  usecs/call     calls    errors syscall
    100.00    0.002943                   145         8 total
    $

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06  4:04 ` grog
@ 2018-07-06 16:10   ` gtaylor
  2018-07-06 18:27     ` [COFF] Editor Scripts scj
  0 siblings, 1 reply; 61+ messages in thread
From: gtaylor @ 2018-07-06 16:10 UTC (permalink / raw)


On 07/05/2018 10:04 PM, Greg 'groggy' Lehey wrote:
> It's really difficult to compare with Unix.  I've tried several times 
> over the years, and I still haven't come to any conclusion.

The impression that I got while reading your description made me think 
of distributed systems that use message bus(es) to communicate between 
applications on different distributed systems.  Or at least the same 
distributed IPC idea, just between parts of a single system, no (what is 
typically TCP/IP over Ethernet) network between them.

Does that even come close?  Or have I completely missed the boat?



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-06 16:10             ` ralph
@ 2018-07-06 16:47               ` gtaylor
  0 siblings, 0 replies; 61+ messages in thread
From: gtaylor @ 2018-07-06 16:47 UTC (permalink / raw)


On 07/06/2018 10:10 AM, Ralph Corderoy wrote:
> Hi Grant,

Hi Ralph,

> Like Ted, I favour sh over bash for scripts when possible, and was just 
> chipping in to help increase sh's number.  :-)

To each his / her own preferences.

I do like the feedback and exposure to alternative solutions.

>      $ strace -c dash -c : |& sed -n '1p;$p'
>      % time     seconds  usecs/call     calls    errors syscall
>      100.00    0.000883                    35         1 total
>      $ strace -c bash -c : |& sed -n '1p;$p'
>      % time     seconds  usecs/call     calls    errors syscall
>      100.00    0.002943                   145         8 total

I like the info.

I'm not at all surprised that dash has fewer calls than bash.

$ strace -c zsh -c : |& sed -n '1p;$p'
% time     seconds  usecs/call     calls    errors syscall
100.00    0.000017                   271        15 total

Oh my.  Zsh is even fatter (call heavy) than Bash.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Editor Scripts
  2018-07-06 16:10   ` gtaylor
@ 2018-07-06 18:27     ` scj
  2018-07-06 19:04       ` gtaylor
  0 siblings, 1 reply; 61+ messages in thread
From: scj @ 2018-07-06 18:27 UTC (permalink / raw)



What I used editor scripts for before utilities like sed came along
was primarily in what would now be called refactoring.
A common pattern for me was to have a function foo that took two
arguments and I wanted to add another argument.
Recall in those days that the arguments to a function had no type
information   foo( x, y ) would be followed by int x in the body of
the function.
Also, there were no function prototypes.   So a very common error
(and a major motivation for writing Lint) was finding functions that
were called inconsistently across the program.

If I wanted to add an argument to foo, the first thing I would do is
run an editor script, something like

    1,$s/foo/old_foo/g
     w
     q

and apply it to all my source and header files.   Then I'd verify
that the resulting files compiled and ran.
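For readers who haven't driven ed this way: a runnable sketch of the
workflow above, with a single hypothetical demo.c standing in for the
source tree (file names and the loop are my illustration, not Steve's
exact setup):

```shell
# Hypothetical demo file standing in for a real source tree.
printf 'int foo(x, y)\nint x, y;\n{ return foo(x, y); }\n' > demo.c

# The 1,$s/foo/old_foo/g editor script from above, in a file.
cat > rename.ed <<'EOF'
1,$s/foo/old_foo/g
w
q
EOF

# -s keeps ed quiet (no byte counts); in practice: for f in *.c *.h
for f in demo.c; do
    ed -s "$f" < rename.ed
done

grep old_foo demo.c
```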

Then I would take the definition of foo (now called old_foo), and
change it to its new form, with the
extra arguments, and rename it to foo.   Then I would grep my source
files for old_foo and change the
calls as needed.   Compiling would find any that I missed, so I
could fix them.

Sounds like a lot of work, but if you did it nearly every day it went
smoothly.

ed lent itself to scripts.  It was all ascii and line oriented.  
With glass teletypes came editors like vi.  Suddenly, a major
fraction of the letters typed were involved in moving a cursor
alone.  Also, vi wasn't as good at doing regular expressions as ed,
especially when lines were being joined   (this was later fixed). 
So editor scripts went from daily  usage to something that felt
increasingly alien.   The fact that you could see the code being
changed was good, but finding the lines to change across a couple of
dozen files was much more time consuming...

Steve

PS:  There are IDEs that make quickly finding the definitions of a
function from its uses, or vice versa, much easier now.  But I think
it falls short of being an abstraction mechanism the way editor
scripts were...  In particular, you can't put such mouse clicks into
a file and run them on a bunch of files...

----- Original Message -----
From: "Grant Taylor" <gtaylor at tnetconsulting.net>
To:<coff at minnie.tuhs.org>
Cc:
Sent:Fri, 6 Jul 2018 10:10:24 -0600
Subject:Re: [COFF] Other OSes?

 On 07/05/2018 10:04 PM, Greg 'groggy' Lehey wrote:
 > It's really difficult to compare with Unix. I've tried several
times 
 > over the years, and I still haven't come to any conclusion.

 The impression that I got while reading your description made me
think 
 of distributed systems that use message bus(es) to communicate
between 
 applications on different distributed systems. Or at least the same 
 distributed IPC idea, just between parts of a single system, no (what
is 
 typically TCP/IP over Ethernet) network between them.

 Does that even come close? Or have I completely missed the boat?

 -- 
 Grant. . . .
 unix || die




^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Editor Scripts
  2018-07-06 18:27     ` [COFF] Editor Scripts scj
@ 2018-07-06 19:04       ` gtaylor
  0 siblings, 0 replies; 61+ messages in thread
From: gtaylor @ 2018-07-06 19:04 UTC (permalink / raw)



On 07/06/2018 12:27 PM, Steve Johnson wrote:
> What I used editor scripts for before utilities like sed came along was 
> primarily in what would now be called refactoring.

Okay.

I think you just touched on at least one thing that predates my unix 
experience.

sed has always existed for me.  (I've only been doing unix things for 
about 20 years.)

I would think that what I do in sed could (still) fairly easily be done 
by scripting ed.

As I type this, I realize that it might be possible (period, not just 
easier) to do something with ed scripts than with sed.  (See the "Pass 
variable into sed program" thread from early May this year in the 
alt.comp.lang.shell.unix.bourne-bash newsgroup.)

> A common pattern for me was to have a function foo that took two 
> arguments and I wanted to add another argument.

Okay.

> Recall in those days that the arguments to a function had no type 
> information   foo( x, y ) would be followed by int x in the body of the 
> function.

I'm ignorant of most of that, both from timing and the fact that I do 
exceptionally little programming.

> Also, there were no function prototypes.   So a very common error (and a 
> major motivation for writing Lint) was finding functions that were
> called inconsistently across the program.

Okay.  I can see how that would be problematic and give rise for tools 
to help avoid said problem.

> If I wanted to add an argument to foo, the first thing I would do is run 
> an editor script, something like
> 
>      1,$s/foo/old_foo/g
>       w
>       q
> 
> and apply it to all my source and header files.

I would think that the exact commands could be run via command mode 
(thus bufdo) or as a script redirected into ex.

I would also seriously consider sed (which you say didn't exist at the 
time) across the proper files (with shell globing for selection).

> Then I'd verify that the resulting files compiled and ran.

Fair.  That makes perfect sense.  I've done similar myself.  (Albeit 
rarely.)

> Then I would take the definition of foo (now called old_foo), and change 
> it to its new form, with the extra arguments, and rename it to foo.

I assume you'd use a similar editor script with a different regular 
expression and / or set of actions.

As I type this I can't think of the exact actions I'd use.  I'd probably 
search for the declaration and substitute the old declaration with the 
new declaration, go down the known number of lines, add the next int z 
line.  Then I'd probably substitute the old invocation with the new 
invocation across all lines (1,$ or % in vi).

I think I'd need a corpus of data to test against / refine.

> Then I would grep my source files for old_foo and change the calls 
> as needed.   Compiling would find any that I missed, so I could fix them.

*nod*

> Sounds like a lot of work, but if you did it nearly every day it went 
> smoothly.

I don't think it is a lot of work.  I think it sounds like a 
codification of the process that I would go through mentally.

I've done similar with a lot of other things (primarily config files), 
sometimes across multiple systems via remote ssh commands.

I think such methodology works quite well.  It does take a mindset to do 
it.  But that's part of the learning curve.

> ed lent itself to scripts.  It was all ascii and line oriented.

Fair.

> With glass teletypes came editors like vi.  Suddenly, a major fraction 
> of the letters typed were involved in moving a cursor alone.

Hum.  I hadn't thought about that.  I personally haven't done much in 
ed, so I never had the comparison.  But it does make sense.

I do have to ask, why does the evolution of vi / vim / emacs / etc 
preclude you from continuing to use ed the way that you historically 
used it?

I follow a number of people on Twitter that still prefer ed.  I suspect 
for some of the reasons that you are mentioning.

> Also, vi wasn't as good at doing regular expressions as ed, especially 
> when lines were being joined   (this was later fixed).

That's all new news to me.  I've been using vim my entire unix career. 
I was never exposed to the issues that you mention.

> So editor scripts went from daily  usage to something that felt 
> increasingly alien.

I guess they don't seem alien to me because I use commands in vim's 
command mode weekly (if not daily) that seem to be the exact same thing. 
  (Adjusting for vim's syntax vs other similar utilities.)

So putting the commands in a file vs typing them on the ex command line 
makes little difference to me.  I'm just doing them interactively.  I 
also feel confident that I could move the commands to their own file 
that is sourced if I wanted to.  Thus I feel like they are still here 
with us.

> The fact that you could see the code being changed was good, but finding 
> the lines to change across a couple of dozen files was much more time 
> consuming...

Ya.  I can see a real advantage of just having the operation happen 
without re-displaying (printing) could be beneficial.

Aside:  I'm reminded of a time when I edited a big (for the time) file 
(< 1MB) with edit on a 386.  I had it do a search for a shorter string 
and replace it with a longer string.  I could literally watch as it 
brought the line onto (usually the top of) the screen, (re)drawing the 
rest of the screen, do the substitution, which caused bumping subsequent 
text, which meant redrawing the remainder of the screen, then finding 
the next occurrence on screen, and repeating.  I ended up walking away 
as that operation couldn't be canceled and took about 20 minutes to run. 
  Oy vey.

> PS:  There are IDEs that make quickly finding the definitions of a 
> function from its uses, or vice versa, much easier now.

I think that it's highly contextually sensitive and really a sub-set of 
what scripts can do.

> But I think it falls short of being an abstraction mechanism the way 
> editor scripts were...

Agreed.

Once you know the process that's being done, you can alter it to work 
for any other thing that you want to do.

> In particular, you can't put such mouse clicks into a file and run them 
> on a bunch of tiles...

Oh ... I'm fairly certain that there are ways to script mouse clicks. 
But that's an entirely different level of annoyance.  One of which 
usually requires things to retain focus or at least not be covered by 
other windows.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-05  6:40 ` bakul
  2018-07-05 15:23   ` clemc
  2018-07-05 22:51   ` ewayte
@ 2018-07-08 20:31   ` perry
  2018-07-08 20:53     ` perry
  2018-07-09  2:44     ` crossd
  2 siblings, 2 replies; 61+ messages in thread
From: perry @ 2018-07-08 20:31 UTC (permalink / raw)


On Jul 4, 2018, at 10:56 PM, Warren Toomey <wkt at tuhs.org> wrote:
> 
> OK, I guess I'll be the one to start things going on the COFF
> list.
>
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?

A fascinating topic!

On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah <bakul at bitblocks.com>
wrote:
> - Capabilities (a number of OSes implemented them -- See Hank
> Levy's book: https://homes.cs.washington.edu/~levy/capabook/

A fascinating resurrection of capabilities in a kind of pleasant
Unixy-way can be found in Robert Watson's "Capsicum" add-ons for
FreeBSD and Linux. I wish it was more widely known about and adopted.

> - Namespaces (plan9)

A good choice I think.

Perry
-- 
Perry E. Metzger		perry at piermont.com


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-05  5:56 [COFF] Other OSes? wkt
                   ` (3 preceding siblings ...)
  2018-07-06  4:04 ` grog
@ 2018-07-08 20:50 ` perry
  2018-07-08 23:27   ` bakul
  4 siblings, 1 reply; 61+ messages in thread
From: perry @ 2018-07-08 20:50 UTC (permalink / raw)


On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey <wkt at tuhs.org> wrote:
> OK, I guess I'll be the one to start things going on the COFF list.
> 
> What other features, ideas etc. were available in other operating
> systems which Unix should have picked up but didn't?
> 
> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for
> long! ]

A minor feature that I might mention: TOPS-20 CMND JSYS style command
completion. TL;DR, this feature could now be implemented, as after
decades of wanting it I finally know how to do it in a unixy way.

In TOPS-20, any time you were at the EXEC (the equivalent of the
shell), you could hit "?" and the thing would tell you what options
there were for the next thing you could type, and you could hit ESC to
complete the current thing. This was Very Very Nice, as flags and
options to programs were all easily discoverable and you had a handy
reminder mechanism when you forgot what you wanted.

bash has some vague lame equivalents of this (it will complete
filenames if you hit tab etc.), and if you write special scripts you
can add domain knowledge into bash of particular programs to allow for
special application-specific completion, but overall it's kind of lame.

Here's the Correct Way to implement this: have programs implement a
special flag that allows them to tell the shell how to do completion
for them! I got this idea from this feature being hacked in, in an ad
hoc way, into clang:

http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html

but it is apparent that with a bit of work, one could standardize such
a feature and allow nearly any program to provide the shell with such
information, which would be very cool. Best of all, it's still unixy
in spirit (IMHO).
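A sketch of what the shell side of such a standard might look like,
using bash's programmable completion.  The tool name "mytool" and its
--autocomplete flag are hypothetical stand-ins here (clang's actual
spelling is --autocomplete=):

```shell
# Registered in e.g. ~/.bashrc.  The shell delegates completion to
# the program itself, so options stay discoverable without a
# hand-written completion script per tool.
_mytool_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # Hypothetical protocol: the program prints one candidate per
    # line for the given prefix.
    COMPREPLY=( $(mytool --autocomplete="$cur" 2>/dev/null) )
}
complete -F _mytool_complete mytool
```

With a flag like this standardized, one generic function (plus one
`complete -F` line per tool, or a `complete -D` default hook) would
replace today's pile of per-program completion scripts.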

Kudos to the LLVM people for figuring out the right way to do
this. I'd been noodling on it since maybe 1984 without any real
success.

Perry
-- 
Perry E. Metzger		perry at piermont.com


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-08 20:31   ` perry
@ 2018-07-08 20:53     ` perry
  2018-07-09  2:44     ` crossd
  1 sibling, 0 replies; 61+ messages in thread
From: perry @ 2018-07-08 20:53 UTC (permalink / raw)


On Sun, 8 Jul 2018 16:31:50 -0400 "Perry E. Metzger"
<perry at piermont.com> wrote:
> On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah <bakul at bitblocks.com>
> wrote:
> > - Capabilities (a number of OSes implemented them -- See Hank
> > Levy's book: https://homes.cs.washington.edu/~levy/capabook/  
> 
> A fascinating resurrection of capabilities in a kind of pleasant
> Unixy-way can be found in Robert Watson's "Capsicum" add-ons for
> FreeBSD and Linux. I wish it was more widely known about and
> adopted.

Thought I might mention where to find more information on that:

Paper:
https://www.usenix.org/legacy/event/sec10/tech/full_papers/Watson.pdf

Presentation at Usenix Security:
https://www.youtube.com/watch?v=raNx9L4VH2k

Web page:
https://www.cl.cam.ac.uk/research/security/capsicum/documentation.html


-- 
Perry E. Metzger		perry at piermont.com



* [COFF] Other OSes?
  2018-07-08 20:50 ` [COFF] Other OSes? perry
@ 2018-07-08 23:27   ` bakul
  2018-07-09  0:00     ` grog
                       ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: bakul @ 2018-07-08 23:27 UTC (permalink / raw)


On Jul 8, 2018, at 1:50 PM, Perry E. Metzger <perry at piermont.com> wrote:
> 
> On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey <wkt at tuhs.org> wrote:
>> OK, I guess I'll be the one to start things going on the COFF list.
>> 
>> What other features, ideas etc. were available in other operating
>> systems which Unix should have picked up but didn't?
>> 
>> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for
>> long! ]
> 
> A minor feature that I might mention: TOPS-20 CMND JSYS style command
> completion. TL;DR, this feature could now be implemented, as after
> decades of wanting it I finally know how to do it in a unixy way.
> 
> In TOPS-20, any time you were at the EXEC (the equivalent of the
> shell), you could hit "?" and the thing would tell you what options
> there were for the next thing you could type, and you could hit ESC to
> complete the current thing. This was Very Very Nice, as flags and
> options to programs were all easily discoverable and you had a handy
> reminder mechanism when you forgot what you wanted.
> 
> bash has some vague lame equivalents of this (it will complete
> filenames if you hit tab etc.), and if you write special scripts you
> can add domain knowledge into bash of particular programs to allow for
> special application-specific completion, but overall it's kind of lame.
> 
> Here's the Correct Way to implement this: have programs implement a
> special flag that allows them to tell the shell how to do completion
> for them! I got this idea from this feature being hacked in, in an ad
> hoc way, into clang:
> 
> http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html
> 
> but it is apparent that with a bit of work, one could standardize such
> a feature and allow nearly any program to provide the shell with such
> information, which would be very cool. Best of all, it's still unixy
> in spirit (IMHO).

I believe autocompletion has been available for 20+ years. IIRC, I
switched to zsh in 1995 and it has had autocompletion then. But you
do have to teach zsh/bash how to autocomplete for a given program.
For instance

  compctl -K listsysctls sysctl
  listsysctls() { set -A reply $(sysctl -AN ${1%.*}) }

The compctl tells the shell what keyword list to use (lowercase k)
or command to use to generate such a list (uppercase K). Then the
command has to figure out how to generate such a list given a prefix.

This sort of magic incantation is needed because no one has bothered
to create a simple library for autocompletion & no standard convention
has sprung up that a program can use. It is not entirely trivial but
not difficult either. Cisco's CLI is a great model for this. It would
even prompt you with a help string for non-keyword args such as ip
address or host! The Cisco CLI used ^D to list choices, ^I to try to
complete, and ? to provide context-sensitive help. It was even better
in that you could get away with just typing a unique prefix. No need to
hit <tab>. Very handy for interactive use.






* [COFF] Other OSes?
  2018-07-08 23:27   ` bakul
@ 2018-07-09  0:00     ` grog
  2018-07-09  0:13       ` perry
  2018-07-09  0:05     ` crossd
  2018-07-09  0:11     ` perry
  2 siblings, 1 reply; 61+ messages in thread
From: grog @ 2018-07-09  0:00 UTC (permalink / raw)


On Sunday,  8 July 2018 at 16:27:54 -0700, Bakul Shah wrote:
> I believe autocompletion has been available for 20+ years.

Yes, I started using bash in 1990, and it had autocompletion then.  A
colleague was using it even earlier, and pointed out to me
autocompletion and Emacs-style editing as one of the great advantages.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA



* [COFF] Other OSes?
  2018-07-08 23:27   ` bakul
  2018-07-09  0:00     ` grog
@ 2018-07-09  0:05     ` crossd
  2018-07-09  0:56       ` lm
  2018-07-09  0:11     ` perry
  2 siblings, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-09  0:05 UTC (permalink / raw)


On Sun, Jul 8, 2018 at 7:28 PM Bakul Shah <bakul at bitblocks.com> wrote:

> [...]
> I believe autocompletion has been available for 20+ years. IIRC, I
> switched to zsh in 1995 and it has had autocompletion then. But you
> do have to teach zsh/bash how to autocomplete for a given program.
>

csh has had filename auto-completion since the late 70s or early 80s,
though nowhere near as rich or as full-featured as bash/zsh, let alone TOPS-20.

[...]
>
> This sort of magic incantation is needed because no one has bothered
> to create a simple library for autocompletion & no standard convention
> has sprung up that a program can use. It is not entirely trivial but
> not difficult either.


This. Much of the issue with Unix was convention, or rather, lack of a
consistent convention. Proponents of DEC operating systems that I've known
decry that Unix can't do stuff like `RENAME *.FTN *.FOR`, because the
shell does wildcard expansion. Unix people I know will retort that that
behavior depends on programming against specific libraries. I think the big
difference is that the DEC systems really encouraged using those libraries
and made their use idiomatic and trivial; it was such a common convention
that NOT doing it that way was extraordinary. On the other hand, Unix never
mandated any specific way to do these things; as a result, everyone did their
own thing. How many times have you looked at a Unix utility of, er, a
certain age, and seen `main()` start something like:

main(argc, argv)
        int argc;
        char *argv[];
{
        if (--argc > 0 && *argv[1] == '-') {
                argv++;
                while (*++*argv)
                switch (**argv) {
                case 'a':
                        /* etc.... */
                        continue;
                }
        }
        /* And so on.... */

I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled! And
thus inconsistent.

> Cisco's CLI is a great model for this. It would
> even prompt you with a help string for non-keyword args such as ip
> address or host! With the Cisco CLI ^D to list choices, ^I to try auto
> complete and ? to provide context sensitive help. It was even better
> in that you can get away with just typing a unique prefix. No need to
> hit <tab>. Very handy for interactive use.


This is unsurprising as the Cisco CLI is very clearly modeled after
TOPS-20/TENEX which did all of those things (much of which was in the CMND
JSYS after work transferred to DEC and TENEX became TOPS-20). It's
definitely one of the cooler features of twenex.

        - Dan C.



* [COFF] Other OSes?
  2018-07-08 23:27   ` bakul
  2018-07-09  0:00     ` grog
  2018-07-09  0:05     ` crossd
@ 2018-07-09  0:11     ` perry
  2018-07-09  0:19       ` crossd
  2 siblings, 1 reply; 61+ messages in thread
From: perry @ 2018-07-09  0:11 UTC (permalink / raw)


On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah <bakul at bitblocks.com>
wrote:
> On Jul 8, 2018, at 1:50 PM, Perry E. Metzger <perry at piermont.com>
> wrote:
> > 
> > On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey <wkt at tuhs.org>
> > wrote:  
> >> OK, I guess I'll be the one to start things going on the COFF
> >> list.
> >> 
> >> What other features, ideas etc. were available in other operating
> >> systems which Unix should have picked up but didn't?
> >> 
> >> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for
> >> long! ]  
> > 
> > A minor feature that I might mention: TOPS-20 CMND JSYS style
> > command completion. TL;DR, this feature could now be implemented,
> > as after decades of wanting it I finally know how to do it in a
> > unixy way.
> > 
> > In TOPS-20, any time you were at the EXEC (the equivalent of the
> > shell), you could hit "?" and the thing would tell you what
> > options there were for the next thing you could type, and you
> > could hit ESC to complete the current thing. This was Very Very
> > Nice, as flags and options to programs were all easily
> > discoverable and you had a handy reminder mechanism when you
> > forgot what you wanted.
> > 
> > bash has some vague lame equivalents of this (it will complete
> > filenames if you hit tab etc.), and if you write special scripts
> > you can add domain knowledge into bash of particular programs to
> > allow for special application-specific completion, but overall
> > it's kind of lame.
> > 
> > Here's the Correct Way to implement this: have programs implement
> > a special flag that allows them to tell the shell how to do
> > completion for them! I got this idea from this feature being
> > hacked in, in an ad hoc way, into clang:
> > 
> > http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html
> > 
> > but it is apparent that with a bit of work, one could standardize
> > such a feature and allow nearly any program to provide the shell
> > with such information, which would be very cool. Best of all,
> > it's still unixy in spirit (IMHO).  
> 
> I believe autocompletion has been available for 20+ years. IIRC, I
> switched to zsh in 1995 and it has had autocompletion then. But you
> do have to teach zsh/bash how to autocomplete for a given program.
> For instance

Yes, that's the point. You have to write it for the programs and the
programs have no way to convey to the shell what they want. This
fixes that.

> This sort of magic incantation is needed because no one has bothered
> to create a simple library for autocompletion & no standard
> convention has sprung up that a program can use.

Yes, I know. That's exactly what I'm explaining. Read the URL above.
It describes a quite general mechanism for a program to convey to the
shell, without needing any special binary support, how autocompletion
should work.

Perry
-- 
Perry E. Metzger		perry at piermont.com



* [COFF] Other OSes?
  2018-07-09  0:00     ` grog
@ 2018-07-09  0:13       ` perry
  0 siblings, 0 replies; 61+ messages in thread
From: perry @ 2018-07-09  0:13 UTC (permalink / raw)


On Mon, 9 Jul 2018 10:00:46 +1000 Greg 'groggy' Lehey
<grog at lemis.com> wrote:
> On Sunday,  8 July 2018 at 16:27:54 -0700, Bakul Shah wrote:
> > I believe autocompletion has been available for 20+ years.  
> 
> Yes, I started using bash in 1990, and it had autocompletion then.
> A colleague was using it even earlier, and pointed out to me
> autocompletion and Emacs-style editing as one of the great
> advantages.

Yes, I know. But it only autocompletes file names, not program flags
and arguments etc., unless you write special bash scripts for every
single program examining what its arguments are. The point of what I
wrote is that there is now a good idea about how programs can convey
their arguments to the shell, thus allowing a general fix to the
issue.

-- 
Perry E. Metzger		perry at piermont.com



* [COFF] Other OSes?
  2018-07-09  0:11     ` perry
@ 2018-07-09  0:19       ` crossd
  2018-07-09  2:00         ` bakul
  2018-07-09 13:13         ` perry
  0 siblings, 2 replies; 61+ messages in thread
From: crossd @ 2018-07-09  0:19 UTC (permalink / raw)


On Sun, Jul 8, 2018 at 8:12 PM Perry E. Metzger <perry at piermont.com> wrote:

> On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah <bakul at bitblocks.com>
> wrote:
> [snip]
> > This sort of magic incantation is needed because no one has bothered
> > to create a simple library for autocompletion & no standard
> > convention has sprung up that a program can use.
>
> Yes, I know. That's exactly what I'm explaining. Read the URL above.
> It describes a quite general mechanism for a program to convey to the
> shell, without needing any special binary support, how autocompletion
> should work.


I read that article and it wasn't clear to me that the `--autocomplete`
argument sent anything back to the shell. I suppose you could use it with
bash/zsh style completion script to get something analogous to
context-sensitive help, but it relies on exec'ing the command (clang or
whatever) and getting output from it that someone or something can parse
and present back to the user in the form of a partial command line; the
examples they had seemed to be when invoking from within emacs and an
X-based program to build up a command.

This is rather unlike how TOPS-20 did it, wherein an image was run-up in
the user's process (effectively, the program was exec'ed, though under
TOPS-20 exec didn't overwrite the image of the currently running program
like the shell) and asked about completions. The critical difference is
that under TOPS-20 context-sensitive help and completions were provided by
the already-runnable/running program. In Unix, the program must be
re-exec'd. It's not a terrible model, but it's very different in
implementation.

        - Dan C.



* [COFF] Other OSes?
  2018-07-09  0:05     ` crossd
@ 2018-07-09  0:56       ` lm
  2018-07-09  2:23         ` crossd
  0 siblings, 1 reply; 61+ messages in thread
From: lm @ 2018-07-09  0:56 UTC (permalink / raw)


On Sun, Jul 08, 2018 at 08:05:29PM -0400, Dan Cross wrote:
> On Sun, Jul 8, 2018 at 7:28 PM Bakul Shah <bakul at bitblocks.com> wrote:
> > [...]
> > I believe autocompletion has been available for 20+ years. IIRC, I
> > switched to zsh in 1995 and it has had autocompletion then. But you
> > do have to teach zsh/bash how to autocomplete for a given program.
> 
> csh has had filename auto-completion since the late 70s or early 80s,
> though nowhere near as rich or as full-featured as bash/zsh, let alone TOPS-20.

Yeah, I'm gonna go with what I learned about TOPS-20; I didn't know that,
that's way way better than any autocompletion I've seen on Unix.

> This. Much of the issue with Unix was convention, or rather, lack of a
> consistent convention. Proponents of DEC operating systems that I've known
> decry that Unix can't do stuff like, `RENAME *.FTN *.FOR`, because the

http://mcvoy.com/lm/move

fixes that.  Since around the late 80's.
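The heart of such a command is small. A toy C sketch of mapping a name from one suffix pattern to another (an illustration only, not the logic of the actual `move` command):

```c
#include <stdio.h>
#include <string.h>

/* Map `name` from pattern "*.ext1" to "*.ext2" (a single leading '*'
 * only), writing the result into out. Returns 1 on match, 0 otherwise.
 * Toy code: real glob-pair renaming handles '?' and multiple '*'s. */
int remap(const char *name, const char *from, const char *to,
          char *out, size_t outsz)
{
	if (from[0] != '*' || to[0] != '*')
		return 0;
	size_t nl = strlen(name);
	size_t fl = strlen(from) - 1;	/* length of the fixed suffix */
	if (nl < fl || strcmp(name + nl - fl, from + 1) != 0)
		return 0;
	/* keep the part the '*' matched, swap in the new suffix */
	snprintf(out, outsz, "%.*s%s", (int)(nl - fl), name, to + 1);
	return 1;
}
```

Quoting the patterns and looping this over a directory gets you `RENAME *.FTN *.FOR` behavior; the DEC systems' advantage was that this logic lived in one standard library rather than in each command.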

> main(argc, argv)
>         int argc;
>         char *argv[];
> {
>         if (--argc > 0 && *argv[1] == '-') {
>                 argv++;
>                 while (*++*argv)
>                 switch (**argv) {
>                 case 'a':
>                         /* etc.... */
>                         continue;
>                 }
>         }
>         /* And so on.... */
> 
> I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled! And
> thus inconsistent.

Gotta agree with this one.  Getopt should have been a thing from day one.
We rolled our own that I like:

http://repos.bkbits.net/bk/dev/src/libc/utils/getopt.c?PAGE=anno&REV=56cf7e34186wKr7L6Lpntw_hwahS0A




* [COFF] Other OSes?
  2018-07-06 15:38       ` gtaylor
@ 2018-07-09  1:56         ` tytso
  2018-07-09  3:25           ` gtaylor
  2018-07-09 11:24           ` perry
  0 siblings, 2 replies; 61+ messages in thread
From: tytso @ 2018-07-09  1:56 UTC (permalink / raw)



On Fri, Jul 06, 2018 at 09:38:12AM -0600, Grant Taylor wrote:
> > With the advent of glass teletypes, shell scripts simply evaporated --
> > there was no equivalent.  (yes, there were programs like sed, but it
> > wasn't the same...).  Changing, e.g., a function name in 10 files got a
> > lot more tedious.
> 
> I don't understand that at all.  How did glass ttys (screens) change what
> people do / did on unix?

I never used Unix on teletypes; when I was using an ASR-35 and a
KSR-33 teletype, it was connected to a PDP-8/i and PDP-15/30, although
both did have a line editor that was very similar to /bin/ed.  (This
is why to this day if I'm on a slow link or am running in a reduced
rescue environment, I fall back to /bin/ed, not /bin/vi --- my finger
macros are more efficient using /bin/ed than /bin/vi.)

At least for me, the huge difference that made a difference to how I
would use a computer primarily had to do with speed that could be sent
from a computer.  So even when using a glass tty, if there was 300 or
1200 bps modem between me and the computer, I would be much more
likely to use editor scripts --- and certainly, I'd be much more
likely to use a line editor than anything curses-oriented, whether
it's vim or emacs.

I'd also be much more thoughtful about figuring out how to carefully
do a global search and replace in a way that wouldn't accidentally
make the wrong change.  Forcing myself to think for a minute or two
about how do clever global search and replaces was well worth it when
there was a super-thin pipe between me and the computer.  These days,
I'll just use emacs's query-replace, which allows me to approve each
change in context; once I'm confident that I got the simple or regexp
search-and-replace right, I have it do the rest of the changes w/o
approval.

> What could / would you do at a shell prompt pre-glass-TTYs that you can't do
> the same now with glass-TTYs?

It's not what you *can't* do with a glass-tty.  It's just that with a
glass-tty, I'm much more likely to rely on incremental searches of my
bash command-line history to execute previous commands, possibly with
some changes, because it's more convenient than firing up an editor
and creating a shell script.

But there have been times, even recently, when I've been stuck behind
a slow link (say, because of a crappy hotel network), where I'll find
myself reverting, at least partially, to my old teletype / 1200 modem
habits.

						- Ted



* [COFF] Other OSes?
  2018-07-09  0:19       ` crossd
@ 2018-07-09  2:00         ` bakul
  2018-07-09  3:02           ` [COFF] Origination of awful security design [COFF, COFF] bill
                             ` (2 more replies)
  2018-07-09 13:13         ` perry
  1 sibling, 3 replies; 61+ messages in thread
From: bakul @ 2018-07-09  2:00 UTC (permalink / raw)


On Jul 8, 2018, at 5:19 PM, Dan Cross <crossd at gmail.com> wrote:
> 
> On Sun, Jul 8, 2018 at 8:12 PM Perry E. Metzger <perry at piermont.com> wrote:
> On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah <bakul at bitblocks.com>
> wrote:
> [snip]
> > This sort of magic incantation is needed because no one has bothered
> > to create a simple library for autocompletion & no standard
> > convention has sprung up that a program can use.
> 
> Yes, I know. That's exactly what I'm explaining. Read the URL above.
> It describes a quite general mechanism for a program to convey to the
> shell, without needing any special binary support, how autocompletion
> should work.
> 
> I read that article and it wasn't clear to me that the `--autocomplete` argument sent anything back to the shell. I suppose you could use it with bash/zsh style completion script to get something analogous to context-sensitive help, but it relies on exec'ing the command (clang or whatever) and getting output from it that someone or something can parse and present back to the user in the form of a partial command line; the examples they had seemed to be when invoking from within emacs and an X-based program to build up a command.

It wasn't clear to me either. I looked at the clang webpage again
and all I see is autocompletion support for clang itself. [In a way
this points to the real problem of clang having too many options]

> This is rather unlike how TOPS-20 did it, wherein an image was run-up in the user's process (effectively, the program was exec'ed, though under TOPS-20 exec didn't overwrite the image of the currently running program like the shell) and asked about completions. The critical difference is that under TOPS-20 context-sensitive help and completions were provided by the already-runnable/running program. In Unix, the program must be re-exec'd. It's not a terrible model, but it's very different in implementation.

When we did cisco CLI support in our router product, syntax for all
the commands was factored out. So for example

show.ip
  help IP information

show.ip.route
  help IP routing table
  exec cmd_showroutes()

show.ip.route.ADDR
  help IP address
  verify verify_ipaddr()
  exec cmd_showroutes()

etc. This was a closed universe of commands, so it was easy to extend.

Perhaps something similar can be done, where in a command src dir
you also store allowed syntax. This can be compiled to a syntax
tree and attached to the binary using some convention. The cmd
binary need not deal with this (except perhaps when help or verify
require cmd-specific support).
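The factored-out syntax above could be compiled down to something as simple as a flat table plus a prefix match (a toy illustration; the table layout and help strings are assumptions, not the actual router code):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical compiled syntax table: each entry is a full command
 * path plus its help string, as in the factored-out CLI spec. */
struct cmd {
	const char *path;
	const char *help;
};
static const struct cmd cmds[] = {
	{ "show.ip",        "IP information" },
	{ "show.ip.route",  "IP routing table" },
	{ "show.interface", "interface status" },
};

/* List every command extending `prefix`; return the match count.
 * With exactly one match the caller can auto-complete, cisco-style;
 * with several it can display them as context-sensitive help. */
int expand(const char *prefix)
{
	int n = 0;
	size_t len = strlen(prefix);
	for (size_t i = 0; i < sizeof cmds / sizeof cmds[0]; i++)
		if (strncmp(cmds[i].path, prefix, len) == 0) {
			printf("%s  -- %s\n", cmds[i].path, cmds[i].help);
			n++;
		}
	return n;
}
```

Storing such a table alongside (or inside) each binary, by convention, is all a shell would need to offer TOPS-20-style completion without re-exec'ing the program.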

I ended up replicating this sort of thing twice since then.



* [COFF] Other OSes?
  2018-07-09  0:56       ` lm
@ 2018-07-09  2:23         ` crossd
  0 siblings, 0 replies; 61+ messages in thread
From: crossd @ 2018-07-09  2:23 UTC (permalink / raw)


On Sun, Jul 8, 2018 at 8:57 PM Larry McVoy <lm at mcvoy.com> wrote:

> On Sun, Jul 08, 2018 at 08:05:29PM -0400, Dan Cross wrote:
> > This. Much of the issue with Unix was convention, or rather, lack of a
> > consistent convention. Proponents of DEC operating systems that I've
> known
> > decry that Unix can't do stuff like, `RENAME *.FTN *.FOR`, because the
>
> http://mcvoy.com/lm/move
>
> fixes that.  Since around the late 80's.
>

Oh sure, you can do it. Quoting the glob patterns to avoid shell expansion
is trivial, and then writing a command to expand the patterns oneself (as
you've clearly done) isn't too bad.

But that's not the point. The point is that there was no standard way to do
it: is there a command to translate character sets, such as `transcs
*.latin1 *.utf8` ?

Incidentally, I think that `move` sort of supports the thesis: I see you
pulled in Ozan's regex and Guido's globbing code and _distributed them with
move_. This latter part is important: I imagine you did that because the
functionality wasn't part of the base operating system image.

(Pretty cool commands, by the way; I'm going to pull that one down locally.)

> main(argc, argv)
> >         int argc;
> >         char *argv[];
> > {
> >         if (--argc > 0 && *argv[1] == '-') {
> >                 argv++;
> >                 while (*++*argv)
> >                 switch (**argv) {
> >                 case 'a':
> >                         /* etc.... */
> >                         continue;
> >                 }
> >         }
> >         /* And so on.... */
> >
> > I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled!
> And
> > thus inconsistent.
>
> Gotta agree with this one.  Getopt should have been a thing from day one.
> We rolled our own that I like:
>
>
> http://repos.bkbits.net/bk/dev/src/libc/utils/getopt.c?PAGE=anno&REV=56cf7e34186wKr7L6Lpntw_hwahS0A


That's pretty cool.

But again, I don't think it's that these things weren't *possible* under
Unix (obviously they were and are) but that they weren't _conventional_. I
don't think a VAX running VMS would have immolated itself if you didn't use
the VMS routines to do wildcard processing, but it would probably have been
considered strange. Under Unix that was the norm.

        - Dan C.



* [COFF] Other OSes?
  2018-07-08 20:31   ` perry
  2018-07-08 20:53     ` perry
@ 2018-07-09  2:44     ` crossd
  2018-07-10  5:30       ` bakul
  1 sibling, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-09  2:44 UTC (permalink / raw)


On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger <perry at piermont.com> wrote:

> [snip]
> On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah <bakul at bitblocks.com>
> wrote:
> > - Capabilities (a number of OSes implemented them -- See Hank
> > Levy's book: https://homes.cs.washington.edu/~levy/capabook/
>
> A fascinating resurrection of capabilities in a kind of pleasant
> Unixy-way can be found in Robert Watson's "Capsicum" add-ons for
> FreeBSD and Linux. I wish it was more widely known about and adopted.
>
> > - Namespaces (plan9)
>
> A good choice I think.


Interestingly, it was in talking with Ben Laurie about a potential security
model for Akaros that I realized the correspondence between Plan9-style
namespaces and capabilities. Since Akaros had imported a lot of plan9
namespace code, we concluded that we already had the building blocks for a
capability-style security model with minimal additional work. I
subsequently concluded that Capsicum wasn't strong enough to be complete.
By way of example, I refer back to my earlier question about networking.
For example, rights(4) on FreeBSD lists CAP_BIND, CAP_CONNECT, CAP_ACCEPT
and other socket related capabilities. That's fine for modeling whether a
process can make a *system call*, but insufficient for validating the
parameters of that system call. Being granted CAP_CONNECT lets me call
connect(2), but how do I model a capability for establishing a TCP
connection to some restricted set of ports on a restricted set of
destination hosts? CAP_BIND let's me call bind(2), but is there some
mechanism to represent the capability of only being able to bind(2) to TCP
port 12345? I imagine I'd want to make it so that I can restrict the
network 5-tuple a particular process can interact with based on some local
policy, but Capsicum as it stands seems too coarse-grained for that.
Similarly with file open's, etc.

Curiously, though it wasn't specifically designed for it, it seems like
the namespace-style representation of capabilities would be simultaneously
more Unix-like and stronger: since the OS only has system calls for
interacting with file-like objects (and processes, but ignore those for a
moment) in the current namespace, a namespace can be constructed with
*exactly* the resources the process can interact with. Given that
networking was implemented as a filesystem, one can imagine interposing a
proxy filesystem that represents the network stack to a process, but
imposes policy restrictions on how the stack is used (e.g., inspecting
connection establishment requests and restricting them to a set of 5-tuples,
etc).

        - Dan C.



* [COFF] Other OSes?
  2018-07-06  5:42   ` bakul
@ 2018-07-09  2:51     ` crossd
  2018-07-10  5:41       ` bakul
  0 siblings, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-09  2:51 UTC (permalink / raw)


On Fri, Jul 6, 2018 at 1:43 AM Bakul Shah <bakul at bitblocks.com> wrote:

> [snip some very interesting and insightful comments]
> Mill ideas are very much worth exploring.  It will be possible
> to build highly secure systems with it -- if it ever gets
> sufficiently funded and built!  IMHO layers of mapping as with
> virtualization/containerization are not really needed for
> better security or isolation.
>

Sure, with emphasis on that "if it ever gets sufficiently funded and
built!" part. :-) It sounds cool, but what to do on extant hardware?
Similarly with CHERI: they change nearly everything (including the
hardware).

> 2. Is mmap() *really* the best we can do for mapping arbitrary resources
> > into an address space?
>
> I think this is fine.  Even remote objects mmapping should
> work!
>

Sure, but is it the *best* we can do? Subjectively, the interface is pretty
ugly, and we're forced into a multi-level store. Maybe that's OK; it sure
seems like we haven't come up with anything better. But I wonder whether
that's because we've found some local maxima in our pursuit of
functionality vs cost, or because we're so stuck in the model of
multi-level stores and mapping objects into address spaces that we can't
see beyond it. And it sure would be nice if the ergonomics of the
programming interface were better.

> 3. A more generalized message passing system would be cool. Something
> where
> > you could send a message with a payload somewhere in a synchronous way
> > would be nice (perhaps analogous to channels). VMS-style mailboxes would
> > have been neat.
>
> Erlang. Carl Hewitt's Actor model has this.
>
> [1]
> http://tierra.aslab.upm.es/~sanz/cursos/DRTS/AlphaRtDistributedKernel.pdf


I'm going to read that paper, but it's at least a couple of decades old
(one of the authors is affiliated with DEC).

        - Dan C.



* [COFF] Origination of awful security design [COFF, COFF]
  2018-07-09  2:00         ` bakul
@ 2018-07-09  3:02           ` bill
  2018-07-09 13:10           ` [COFF] Other OSes? david
  2018-07-09 13:17           ` perry
  2 siblings, 0 replies; 61+ messages in thread
From: bill @ 2018-07-09  3:02 UTC (permalink / raw)



Okay, who or what organization did it?  A wretched security issue that remains in many UNIX variants even to this day...

Is this an example of boneheaded groupthink or a lone wolf trying to achieve parallel structure in design without thinking of the consequences:

There is a command called “last.”  Typically, when used with -R you can tell the last successful logins on the system along with origination IP.  Very clean command.

Then, presumably, someone else comes along and writes its corollary: lastb

The “lastb” command is the “last” command’s evil twin.  The lastb command is one of the finest examples of the need to take human nature, if not common sense, into the deliberation process when deciding available commands and functionality.

The “lastb” command takes whatever was keyed into login that fails to register as a successful login and writes the output to file.  

Thus, any user making a fat-finger mistake is certain, sooner or later, to key a password into the “login:” (log name) prompt of the login command.

Over time, say three (3) years with even a small user population, passwords will be revealed in the lastb file:  btmp, bwtmp, and so on.   

The problem, of course, is that there is no attempt to encrypt this file used by lastb on many UNIX systems.  Oh, your cron kicks off a job that truncates the files used by lastb every week, you say?  That’s playing fast and loose.
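A minimal mitigation sketch (not the historical fix): make the file root-only, so any mistyped passwords in it are at least not world-readable. The path /var/log/btmp is the common Linux location and varies by system; a temporary file stands in for it below so the commands are safe to run anywhere:

```shell
# Sketch: lock down the file lastb reads (commonly /var/log/btmp on Linux;
# the path varies by system). A temp file stands in for it here.
BTMP=$(mktemp)              # stand-in for /var/log/btmp
chmod 600 "$BTMP"           # owner (root, in real life) read/write only
mode=$(stat -c %a "$BTMP")  # GNU stat; prints the octal mode
echo "$mode"                # 600
rm -f "$BTMP"
```

On real systems the file should also be rotated aggressively, which is exactly the "fast and loose" cron truncation described above, plus permissions.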

Anyway, this reminds me of a trick a ten year old (the boss’s kid) pranked on everyone in the office any chance he could:

In 1985, this little hacker would silently stand by your terminal as you returned from lunch, and just after you keyed in your response to “login:”, he would reach over and press the return key directly after yours.

Well, that double return would enter an empty line in response to “password:”.  The login command would promptly comply and print “login:” again.  Keeping your momentum, not really realizing you had just been victimized, you would press on and enter your password followed by the return key.

At that point, echo is turned on and surprisingly (to the uninitiated anyway) your password appears on the tube for all to see. 

And, yes, login complies with its security-unconscious codebase (assuming you pressed return after your password was displayed), and this further memorializes the nefarious transaction by writing the offense into the file used by lastb.

For example, a relatively recent version of a well-known UNIX still ships the lastb command.  It can be disabled.

Look, if you really want the lastb feature, why not encrypt its file?  Or only allow lastb entries where a particular login exists on the system.  (Although this too is an awful practice, as it clues the hacker into valid logins.  Our Founding Fathers taught us that merely timing the response, down to the millisecond (and now even finer granularity is required), can alert a savvy hacker to the validity of a login.)

Just knowing that a login is valid wins the hacker almost half the battle.  Here, my complaint is that nearly clear-text passwords are likely placed in the files used by lastb over time.  I realize this functionality can easily be turned off.  But human nature reminds us that simply having dangerous functionality available is a problem.

Can anyone on our Forum elucidate the impetus and rationale for lastb, and why its warts have been around for so long?

Bill Corcoran 





^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  1:56         ` tytso
@ 2018-07-09  3:25           ` gtaylor
  2018-07-09  3:35             ` crossd
  2018-07-09  5:23             ` tytso
  2018-07-09 11:24           ` perry
  1 sibling, 2 replies; 61+ messages in thread
From: gtaylor @ 2018-07-09  3:25 UTC (permalink / raw)



On 07/08/2018 07:56 PM, Theodore Y. Ts'o wrote:
> I never used Unix on teletypes; when I was using an ASR-35 and a KSR-33 
> teletype, it was connected to a PDP-8/i and PDP-15/30, although both 
> did have a line editor that was very similar to /bin/ed.  (This is why 
> to this day if I'm on a slow link or am running in a reduced rescue 
> environment, I fall back to /bin/ed, not /bin/vi --- my finger macros 
> are more efficient using /bin/ed than /bin/vi.)

Please forgive my assumption and ignorance.  What OS ran on the PDP-8/i 
or PDP-15/30?

I fully get falling back to old habits that work well, especially in a 
constrained environment.

> At least for me, the huge difference that made a difference to how I 
> would use a computer primarily had to do with speed that could be sent 
> from a computer.  So even when using a glass tty, if there was 300 or 
> 1200 bps modem between me and the computer, I would be much more likely 
> to use editor scripts --- and certainly, I'd be much more likely to use 
> a line editor than anything curses-oriented, whether it's vim or emacs.

Doing different things based on the (lack of) speed of the connection 
makes complete sense.

> I'd also be much more thoughtful about figuring out how to carefully 
> do a global search and replace in a way that wouldn't accidentally 
> make the wrong change.  Forcing myself to think for a minute or two 
> about how do clever global search and replaces was well worth it when 
> there was a super-thin pipe between me and the computer.  These days, 
> I'll just use emacs's query-replace, which will allow me to approve each 
> change in context, either for each change, or once I'm confident that I 
> got the simple-search-and-replace, or regexp-search-and-replace right, 
> have it do the rest of the changes w/o approval.

In light of the (lack of) speed aspect above, that seems perfectly 
reasonable.

I too do something similar in vi(m) as far as confirming some changes as 
I gain trust that they are doing the proper thing.

> It's not what you *can't* do with a glass-tty.  It's just that with a 
> glass-tty, I'm much more likely to rely on incremental searches of my 
> bash command-line history to execute previous commands, possibly with 
> some changes, because it's more convenient than firing up an editor and 
> creating a shell script.

ACK

> But there have been times, even recently, when I've been stuck behind a 
> slow link (say, because of a crappy hotel network), where I'll find myself 
> reverting, at least partially, to my old teletype / 1200 modem habits.

Fair.

Will you please elaborate on what you mean by "editor scripts"?  That's 
a term that I'm not familiar with.  —  I didn't see an answer to this 
question, so I'm asking again.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20180708/7fe551cd/attachment.bin>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:25           ` gtaylor
@ 2018-07-09  3:35             ` crossd
  2018-07-09  3:43               ` gtaylor
  2018-07-09  5:23             ` tytso
  1 sibling, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-09  3:35 UTC (permalink / raw)



On Sun, Jul 8, 2018 at 11:24 PM Grant Taylor via COFF <coff at minnie.tuhs.org>
wrote:

> On 07/08/2018 07:56 PM, Theodore Y. Ts'o wrote:
> > [snip]

> At least for me, the huge difference that made a difference to how I
> > would use a computer primarily had to do with speed that could be sent
> > from a computer.  So even when using a glass tty, if there was 300 or
> > 1200 bps modem between me and the computer, I would be much more likely
> > to use editor scripts --- and certainly, I'd be much more likely to use
> > a line
> [snip]
>
> Will you please elaborate on what you mean by "editor scripts"?  That's
> a term that I'm not familiar with.  —  I didn't see an answer to this
> question, so I'm asking again.
>

Back in the days of line editors, which read their commands from the
standard input and were relatively simple programs as far as their user
interface was concerned, you could put a set of editor commands into a file
and run it sort of like a shell script. This way, you could run the same
sequence of commands against (potentially) many files. Think something like:

$ cat >scr.ed
g/unix/s/unix/Unix/g
w
q
^D
$ for f in *.ms; do ed $f << scr.ed; done; unset f
...

Back in the days of teletypes, line editors were of course the only things
we had. When we moved to glass TTYs with cursor addressing we got richer
user interfaces, but with those came more complex input handling (often
reading directly from the terminal in "raw" mode), which meant that
scripting the editor was harder, as you usually couldn't just redirect a
file into its stdin.

        - Dan C.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:35             ` crossd
@ 2018-07-09  3:43               ` gtaylor
  2018-07-09  3:52                 ` imp
  2018-07-09 11:34                 ` crossd
  0 siblings, 2 replies; 61+ messages in thread
From: gtaylor @ 2018-07-09  3:43 UTC (permalink / raw)


On 07/08/2018 09:35 PM, Dan Cross wrote:
> Back in the days of line editors, which read their commands from the 
> standard input and were relatively simple programs as far as their user 
> interface was concerned, you could put a set of editor commands into a 
> file and run it sort of like a shell script. This way, you could run the 
> same sequence of commands against (potentially) many files. Think 
> something like:

ACK

I figured that you were referring to something like that.  But I wanted 
to ask in case there was something else that I didn't know about but 
could benefit from knowing.  E.g. vimscript.

> $ cat >scr.ed
> g/unix/s/unix/Unix/g
> w
> q
> ^D
> $ for f in *.ms; do ed $f << scr.ed; done; unset f
> ...

Nice global command.  Run the substitution (globally on the line) on any 
line containing "unix".  I like it.  ;-)

The double << is different than what I would expect.  I wonder if that's 
specific to the shell or appending to the input after the file?

> Back in the days of teletypes, line editors were of course the only 
> things we had. When we moved to glass TTYs with cursor addressing we got 
> richer user interfaces, but with those came more complex input handling 
> (often reading directly from the terminal in "raw" mode), which meant 
> that scripting the editor was harder, as you usually couldn't just 
> redirect a file into its stdin.

That makes sense.  Thank you for the explanation.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:43               ` gtaylor
@ 2018-07-09  3:52                 ` imp
  2018-07-09 11:32                   ` perry
  2018-07-09 11:34                 ` crossd
  1 sibling, 1 reply; 61+ messages in thread
From: imp @ 2018-07-09  3:52 UTC (permalink / raw)


I've often thought it a mistake to have main() be the only entry point.
There's a simple elegance in that, I'll grant, but if there were a
parse_args() entry point that also allowed for completion, called
before main, and you'd only proceed to main on a complete command
line, you could easily put the burden of command-line completion onto the
programs themselves. They know that arg3 is a filename or a sub-command or
a network interface or an alternate name of Cthulhu. tcsh does command
completion, but in a half-assed way: it has to know about all the commands
and keep a giant table that duplicates all the programs' parsers, which
isn't scalable. It's had that since the '80s, as has bash, and both have
been equally lame for just as long. It would also let the program do 'noise
words' like TOPS-20 did w/o having to actually parse them...

clang --complete is an interesting variation on my ideas within the realm
of doing non-standard weird things, and it starts to place the burden of
knowledge on the program itself, which is more in line with the thinking of
Unix and the mainstream of OO-ish thought we've known about since the early
70s with Smalltalk and other such pioneering things.

Then again, maybe my idea is too much influenced by TOPS-20 commands that
became resident to do command completion, then ran more quickly because
they were already at least half-loaded when the user hit return.

Warner

On Sun, Jul 8, 2018 at 9:43 PM, Grant Taylor via COFF <coff at minnie.tuhs.org>
wrote:

> On 07/08/2018 09:35 PM, Dan Cross wrote:
>
>> Back in the days of line editors, which read their commands from the
>> standard input and were relatively simple programs as far as their user
>> interface was concerned, you could put a set of editor commands into a file
>> and run it sort of like a shell script. This way, you could run the same
>> sequence of commands against (potentially) many files. Think something like:
>>
>
> ACK
>
> I figured that you were referring to something like that.  But I wanted to
> ask in case there was something else that I didn't know about but could
> benefit from knowing.  I.e. vimscript.
>
> $ cat >scr.ed
>> g/unix/s/unix/Unix/g
>> w
>> q
>> ^D
>> $ for f in *.ms; do ed $f << scr.ed; done; unset f
>> ...
>>
>
> Nice global command.  Run the substitution (globally on the line) on any
> line containing "unix".  I like it.  ;-)
>
> The double << is different than what I would expect.  I wonder if that's
> specific to the shell or appending to the input after the file?
>
> Back in the days of teletypes, line editors were of course the only things
>> we had. When we moved to glass TTYs with cursor addressing we got richer
>> user interfaces, but with those came more complex input handling (often
>> reading directly from the terminal in "raw" mode), which meant that
>> scripting the editor was harder, as you usually couldn't just redirect a
>> file into its stdin.
>>
>
> That makes sense.  Thank you for the explanation.
>
>
>
> --
> Grant. . . .
> unix || die
>
>
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:25           ` gtaylor
  2018-07-09  3:35             ` crossd
@ 2018-07-09  5:23             ` tytso
  2018-07-09 12:52               ` clemc
  1 sibling, 1 reply; 61+ messages in thread
From: tytso @ 2018-07-09  5:23 UTC (permalink / raw)



On Sun, Jul 08, 2018 at 09:25:27PM -0600, Grant Taylor via COFF wrote:
> 
> Please forgive my assumption and ignorance.  What OS ran on the PDP-8/i or
> PDP-15/30?

There were multiple OS's for the PDP-8 and PDP-15.  What I used on the
PDP-8/i was the 4k disk monitoring system, so named because it only
required 4k of 12-bit wide core memory.  The resident portion of the
OS only required 128 12-bit words, loaded at octal 7600, at the top of
the 4k memory.  It could be bootstrapped by toggling in 4 (12-bit
wide) instructions into the front console which had about 24 binary
switches[1].

[1] https://www.youtube.com/watch?v=yUZrn7qTGcs

The OS was distributed on paper tape, which was loaded by toggling in
18 instructions of the RIM loader, which was then used to load the BIN
loader from paper tape into core memory.  The RIM loader was designed
to be simple and easy to toggle in at the console.  (ROM?  EPROM?  We
don't need no stink'in firmware in read-only memories!)  The BIN
loader could read in more efficiently packed data stored on punched
paper tape.  The BIN loader would then be used to load the disk
builder program, which would install the OS onto the DF-32 (which
stored 32k 12-bit words on a 12" platter).

Later PDP-8's would run a more sophisticated OS, such as OS/8, which
had a "Concise Command Language" (CCL) designed to be similar to that
of the TOPS-10 system running on the PDP-10.  OS/8 was a single-user
system, though; no time-sharing!


The PDP-15/30 that I used had a paper tape reader and four DECtape
units.  It ran a background-foreground monitor.  The background system
was what was used for normal program development.  The foreground job
had unconditional priority over the background job and was used for
jobs such as real-time data acquisition.  When the
background/foreground OS was started, initially only the foreground
teletype was active.  If you didn't have any foreground job to
execute, you'd start the "idle" program, which once started, would
then cause the background teletype to come alive and print a command
prompt.  So it was a tad bit more sophisticated than the 4k disk
monitor system.

> Will you please elaborate on what you mean by "editor scripts"?  That's a
> term that I'm not familiar with.  —  I didn't see an answer to this
> question, so I'm asking again.

There have been times when I'll do something like this in a shell
script:

#!/bin/sh

for i in file1 file2 file3 ; do
   ed $i << EOF
/^$/
n
s/^Obama/Trump/
w
q
EOF
done

This is a toy example, but hopefully it gets the point across.  There
are times when you don't want to use a stream editor, but instead want
to send a series of editor commands to an editor like /bin/ed.  I
suspect that younger folks would probably use something else, perhaps
perl, instead.
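For comparison, the same sort of one-off change (touch only the first line following the first empty line, the apparent intent of the ed commands above) can be written as a stream edit. A sketch in awk, not anyone's actual workflow:

```shell
# Substitute only on the first line that follows the first empty line;
# every other line passes through untouched.
edit_after_blank() {
    awk 'prev_blank && !done { sub(/^Obama/, "Trump"); done = 1 }
         { prev_blank = ($0 == ""); print }'
}

printf 'intro\n\nObama line\nObama again\n' | edit_after_blank
# prints: intro, a blank line, "Trump line", "Obama again"
```

The difference from ed is the usual one: the stream tool recomputes its position as it reads, while the ed script navigates with addresses.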

					- Ted


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  1:56         ` tytso
  2018-07-09  3:25           ` gtaylor
@ 2018-07-09 11:24           ` perry
  1 sibling, 0 replies; 61+ messages in thread
From: perry @ 2018-07-09 11:24 UTC (permalink / raw)


On Sun, 8 Jul 2018 21:56:50 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
wrote:
> These days, I'll just use emacs's query-replace, which will allow
> me to approve each change in context, either for each change, or
> once I'm confident that I got the simple-search-and-replace, or
> regexp-search-and-replace right, have it do the rest of the changes
> w/o approval.

I often use a utility called "qsubst" that allows emacs-like query
replace at the command line. I got it off the net around 1990 and
haven't seen it widely distributed, but it's Damn Useful.

On the more general topic: I, too, never used Unix on a printing
terminal (by the time I got to it in the early 1980s everything was
CRTs) and I've used shell scripts pretty consistently over the past few
decades. I tend not to write really long ones any more -- the advent
of Perl and then languages like Ruby and Python sort of ended that --
but I write short ones a lot, and I write five-line ones at the bash
prompt several times a day. (One reason why emacs/vi like command
line editing is so useful to me is it lets me quickly hack up a
script at the terminal prompt.)

And yes, if it's got a couple of nested loops and a long pipeline or
two, I think it's still a script even if I type it ad hoc.

> It's not what you *can't* do with a glass-tty.  It's just that with
> a glass-tty, I'm much more likely to rely on incremental searches
> of my bash command-line history to execute previous commands,
> possibly with some changes, because it's more convenient than
> firing up an editor and creating a shell script.

Indeed. That's my work style as well.

Perry
-- 
Perry E. Metzger		perry at piermont.com


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:52                 ` imp
@ 2018-07-09 11:32                   ` perry
  2018-07-09 11:50                     ` perry
  0 siblings, 1 reply; 61+ messages in thread
From: perry @ 2018-07-09 11:32 UTC (permalink / raw)


On Sun, 8 Jul 2018 21:52:09 -0600 Warner Losh <imp at bsdimp.com> wrote:
> It would also let the program do 'noise
> words' like TOPS-20 did w/o having to actually parse them...

Noise words are a thing Unix is missing, but given the lack of
CMND JSYS style completion, the reason for the lack is obvious --
nothing generates noisewords so nothing needs to ignore them. This is
yet another cool thing clang's --complete hack could make widely
available, though then we'd need a standard for noisewords.

> clang --complete is an interesting variation on my ideas within the
> realm of doing non-standard weird things and starts to place the
> burden of knowledge on the program itself, which is more in line
> with the thinking of Unix and the main stream of OOish thought
> we've know about since the early 70s with smalltalk and other such
> pioneering things.

Precisely. The clang hack is exactly what one would want if it could
be made popular.

Perry
-- 
Perry E. Metzger		perry at piermont.com


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  3:43               ` gtaylor
  2018-07-09  3:52                 ` imp
@ 2018-07-09 11:34                 ` crossd
  1 sibling, 0 replies; 61+ messages in thread
From: crossd @ 2018-07-09 11:34 UTC (permalink / raw)


On Sun, Jul 8, 2018 at 11:42 PM Grant Taylor via COFF <coff at minnie.tuhs.org>
wrote:

> [snip]
> The double << is different than what I would expect.  I wonder if that's
> specific to the shell or appending to the input after the file?
>

Actually, that was just a typo; it should have been a single '<'.

The '<<' (double less-than) is how you would introduce a 'here' document,
as Ted did in his example.

        - Dan C.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09 11:32                   ` perry
@ 2018-07-09 11:50                     ` perry
  0 siblings, 0 replies; 61+ messages in thread
From: perry @ 2018-07-09 11:50 UTC (permalink / raw)


On Mon, 9 Jul 2018 07:32:41 -0400 "Perry E. Metzger"
<perry at piermont.com> wrote:
> On Sun, 8 Jul 2018 21:52:09 -0600 Warner Losh <imp at bsdimp.com>
> wrote:
> > It would also let the program do 'noise
> > words' like TOPS-20 did w/o having to actually parse them...  
> 
> Noise words are a thing Unix is missing, but given the lack of
> CMND JSYS style completion, the reason for the lack is obvious --
> nothing generates noisewords so nothing needs to ignore them. This
> is yet another cool thing clang's --complete hack could make widely
> available, though then we'd need a standard for noisewords.

Actually, it occurs to me that noisewords aren't actually needed. The
printed help during completion can handle conveying the information
that noisewords provided.

Perry

> > clang --complete is an interesting variation on my ideas within
> > the realm of doing non-standard weird things and starts to place
> > the burden of knowledge on the program itself, which is more in
> > line with the thinking of Unix and the main stream of OOish
> > thought we've know about since the early 70s with smalltalk and
> > other such pioneering things.  
> 
> Precisely. The clang hack is exactly what one would want if it could
> be made popular.
> 
> Perry

-- 
Perry E. Metzger		perry at piermont.com


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  5:23             ` tytso
@ 2018-07-09 12:52               ` clemc
  2018-07-09 13:06                 ` [COFF] PiDP Obsolesces Guaranteed clemc
  2018-07-09 14:39                 ` [COFF] Other OSes? tytso
  0 siblings, 2 replies; 61+ messages in thread
From: clemc @ 2018-07-09 12:52 UTC (permalink / raw)



On Mon, Jul 9, 2018 at 1:23 AM Theodore Y. Ts'o <tytso at mit.edu> wrote:

>
> There were multiple OS's for the PDP-8 and PDP-15.

Right.  You can find them on the PDP-8 web sites, for instance.  Check out
simh or, better yet, the PiDP-8, which I have running with 4 OSes loaded
into the ROMs.  The RIM loader can be loaded from the toggle switches for
the purist, but the Pi will do it for you on power-up if you like and don’t
want the tedium.


I used on the
> PDP-8/i was the 4k disk monitoring system, so named becasue it only
> required 4k of 12-bit wide core memory.

Right.  I believe that was the first widely available OS, and it is
described in the Small Computer Handbook.  Remember, mini (as in
minicomputer) was a term Gordon coined to mean minimal computer.


The resident portion of the
> OS only required 128 12-bit words, loaded at octal 7600, at the top of
> the 4k memory.  It could be bootstrapped by toggling in 4 (12-bit
> wide) instructions into the front console which had about 24 binary
> switches[1].


>
> [1] https://www.youtube.com/watch?v=yUZrn7qTGcs
>
> The OS was distributed on paper tape, which was loaded by toggling in
> 18 instructions of the RIM loader, which was then to load the BIN
> loader from paper tape into core memory.  The RIM loader was designed
> to be simple and easy to toggle into the console.  (ROM?  EPROM?  We
> don't need no stink'in firmware in read-only memories!)  The BIN
> loader could read in a more effoiciently packed data stored in punched
> paper tape.  The BIN loader would then be used to load the disk
> builder program which would install the OS into the DF-32 (which
> stored 32k 12-bit words on a 12" platter).

Right, and the RIM loader was printed on the front panel of the console.
BTW, the disk was 19” in diameter.  Danny Klein has the original disk
platter from one of the original PDP-8s; before marriage it used to hang in
his living room.  FYI, that was the system at the computer museum from the
EE Dept after it died in approx ‘75; I was there when the disk crashed.


>
> Later PDP-8's would run more a sophisticated OS, such as OS/8, which
> had a "Concise Command Language" (CCL) that was designed to be similar
> to the TOPS-10 system running on the PDP-10.  OS/8 was a single-user
> system, though; no time-sharing!


Be careful, grasshopper.   TSS/8 is available on those web sites, although
I admit it does not run on my PiDP-8 and I have not figured out why
(something is corrupt and I have not spent the time or energy to chase it).
Anyway, TSS/8 supported 4-8 ASR-33 terminals, each with 4K words as you
described before.  There were an assembler, BASIC, FOCAL, FORTRAN-IV, and
an Algol with circa-1965 extensions.

>
>
>
> The PDP-15/30 that I used had a paper tape reader and four DECtape
> units.  It ran a background-foreground monitor.  The background system
> was what was used for normal program development.  The foreground job
> had unconditional priority over the background job and was used for
> jobs such as real-time data acquisition.  When the
> background/foreground OS was started, initially only the foreground
> teletype was active.  If you didn't have any foreground job to
> execute, you'd start the "idle" program, which once started, would
> then cause the background teletype to come alive and print a command
> prompt.  So it was a tad bit more sophisticated than the 4k disk
> monitor system.

Right, this is really the model for RT-11, which would beget CP/M and later
86-DOS (aka PC-DOS, later renamed MS-DOS).

-- 
Sent from a handheld expect more typos than usual


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] PiDP Obsolesces Guaranteed
  2018-07-09 12:52               ` clemc
@ 2018-07-09 13:06                 ` clemc
  2018-07-09 14:39                 ` [COFF] Other OSes? tytso
  1 sibling, 0 replies; 61+ messages in thread
From: clemc @ 2018-07-09 13:06 UTC (permalink / raw)



BTW, just last night before I went to bed, I booted my PiDP-11/70 for the
first time.  I have one of the first alpha units, since I’ve been coaching
Oscar a small amount on the project.  I left it running some tests last
night and am off to USENIX ATC today, so I have not even tried to boot Unix
on it yet.   One of the cool things about this project: I was able to
arrange for Jim Bleck, the designer of the original 11/70 front panel, to
consult with Oscar on the molds, and Oscar was then able to have the molds
and the switches made in Europe.   Anyway, the PiDP-8 is cool and looks the
same as the original 8/I, but its panel is framed/finished in wood.  For
the 11, the panel is the real thing: DEC white (and hopefully it will not
fade, using modern plastics).    FYI, Oscar is considering doing the -10
next, but given the amount of work for the 8 and the 11, it’s more than a
hobby.
-- 
Sent from a handheld expect more typos than usual


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  2:00         ` bakul
  2018-07-09  3:02           ` [COFF] Origination of awful security design [COFF, COFF] bill
@ 2018-07-09 13:10           ` david
  2018-07-09 13:17           ` perry
  2 siblings, 0 replies; 61+ messages in thread
From: david @ 2018-07-09 13:10 UTC (permalink / raw)


> When we did cisco CLI support in our router product, syntax for all
> the commands was factored out. So for example
> 
> show.ip
>  help IP information
> 
> show.ip.route
>  help IP routing table
>  exec cmd_showroutes()
> 
> show.ip.route.ADDR
>  help IP address
>  verify verify_ipaddr()
>  exec cmd_showroutes()
> 
> etc. This was a closed universe of commands so easy to extend.
> 
> Perhaps something similar can be done, where in a command src dir
> you also store allowed syntax. This can be compiled to a syntax
> tree and attached to the binary using some convention. The cmd
> binary need not deal with this (except perhaps when help or verify
> required cmd specific support).
> 

This sounds like a resource fork on a file. Much like MacOS (pre X) had for all files. It was a convenient way to keep data that defined the application associated with the application. You could also edit it to customize it for your own use if you wished.

There were a great many good things about the MacOS of old.
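The quoted cisco-style factoring can be prototyped in a few lines of shell; the file format here (dotted command path, a tab, help text) is invented for illustration and is not what cisco actually shipped:

```shell
# Sketch: a command's allowed syntax lives in a small table beside the
# binary; a generic completer just greps the table by prefix.
syntax=$(mktemp)            # stand-in for a per-command syntax file
printf 'show.ip\tIP information\nshow.ip.route\tIP routing table\n' > "$syntax"

# List every syntax node (and its help text) under a given prefix.
complete_path() { grep "^$1" "$syntax"; }

out=$(complete_path show.ip.route)   # exactly one node matches
complete_path show.ip                # prints both nodes with their help
rm -f "$syntax"
```

A resource fork, as noted above, is one place such a table could live; on Unix the nearest equivalents would be a companion file next to the binary or a dedicated section in the executable.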

	David





^ permalink raw reply	[flat|nested] 61+ messages in thread

* [COFF] Other OSes?
  2018-07-09  0:19       ` crossd
  2018-07-09  2:00         ` bakul
@ 2018-07-09 13:13         ` perry
  1 sibling, 0 replies; 61+ messages in thread
From: perry @ 2018-07-09 13:13 UTC (permalink / raw)


On Sun, 8 Jul 2018 20:19:18 -0400 Dan Cross <crossd at gmail.com> wrote:
> On Sun, Jul 8, 2018 at 8:12 PM Perry E. Metzger
> <perry at piermont.com> wrote:
> 
> > On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah <bakul at bitblocks.com>
> > wrote:
> > [snip]  
> > > This sort of magic incantation is needed because no one has
> > > bothered to create a simple library for autocompletion & no
> > > standard convention has sprung up that a program can use.  
> >
> > Yes, I know. That's exactly what I'm explaining. Read the URL
> > above. It describes a quite general mechanism for a program to
> > convey to the shell, without needing any special binary support,
> > how autocompletion should work.  
> 
> 
> I read that article and it wasn't clear to me that the
> `--autocomplete` argument sent anything back to the shell.

The shell autocompletion script parses the output. That's the
point of the facility. It lets you get a TOPS-20 style completion
model where you get completion and help on all options, but the shell
doesn't need to be given that information ad hoc, it gets it from the
program through the use of the --autocomplete flag.

> I
> suppose you could use it with bash/zsh style completion script to
> get something analogous to context-sensitive help,

That's the sole purpose of it.

> but it relies on
> exec'ing the command (clang or whatever) and getting output from it
> that someone or something can parse and present back to the user in
> the form of a partial command line;

That's what it does, yes.

> This is rather unlike how TOPS-20 did it, wherein an image was
> run-up in the user's process (effectively, the program was exec'ed,
> though under TOPS-20 exec didn't overwrite the image of the
> currently running program like the shell) and asked about
> completions.

In this case, the program is run and asked about completions. It's a
somewhat different mechanism, but Unix is a different operating
system, and in the modern world, the time involved is ignorable.

Perry
-- 
Perry E. Metzger		perry at piermont.com



* [COFF] Other OSes?
  2018-07-09  2:00         ` bakul
  2018-07-09  3:02           ` [COFF] Origination of awful security design [COFF, COFF] bill
  2018-07-09 13:10           ` [COFF] Other OSes? david
@ 2018-07-09 13:17           ` perry
  2 siblings, 0 replies; 61+ messages in thread
From: perry @ 2018-07-09 13:17 UTC (permalink / raw)


On Sun, 8 Jul 2018 19:00:31 -0700 Bakul Shah <bakul at bitblocks.com>
wrote:
> It wasn't clear to me either. I looked at the clang webpage again
> and all I see is autocompletion support for clang itself.

It is only autocompletion help for clang itself, but the point is
that you could implement a similar flag for any program. If people did
this consistently, across the board, it would fix the problem, cleanly
and efficiently: you just add a --autocomplete flag to most programs so
that they can cooperate with the shell's autocompletion, and then
everything is as friendly as TOPS-20 at last.
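To make the convention concrete, here's a rough sketch of the program side (the option table and the one-candidate-per-line output format are assumptions for illustration, not clang's actual behavior):

```python
import sys

# Hypothetical option table for a toy tool; a real program would
# derive this from whatever argument parser it already uses.
OPTIONS = ["--help", "--output", "--verbose", "--version"]

def autocomplete(prefix):
    """Return the candidates matching a partial word, in the (assumed)
    one-per-line form a shell completion script would parse."""
    return [opt for opt in OPTIONS if opt.startswith(prefix)]

def main(argv):
    # The assumed convention: "tool --autocomplete PREFIX" prints the
    # matching candidates, one per line, and exits.
    if len(argv) >= 2 and argv[1] == "--autocomplete":
        prefix = argv[2] if len(argv) > 2 else ""
        print("\n".join(autocomplete(prefix)))
        return 0
    # ... the tool's normal behavior would go here ...
    return 0
```

The shell side would then do something like `COMPREPLY=( $(tool --autocomplete "$cur") )` in a bash completion function, without the completion script needing ad hoc knowledge of the tool's options.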

Perry
-- 
Perry E. Metzger		perry at piermont.com



* [COFF] Other OSes?
  2018-07-09 12:52               ` clemc
  2018-07-09 13:06                 ` [COFF] PiDP Obsolesces Guaranteed clemc
@ 2018-07-09 14:39                 ` tytso
  2018-07-09 14:46                   ` clemc
  1 sibling, 1 reply; 61+ messages in thread
From: tytso @ 2018-07-09 14:39 UTC (permalink / raw)



On Mon, Jul 09, 2018 at 08:52:00AM -0400, Clem Cole wrote:
> 
> rIght and the RIM loaded was printed on front panel of the console.  btw
> the disk was 19” in diamete.  Danny Klein has the original disk platter
> from the one of the original PDP-8 - before marriage it used to hang in his
> living room. FYI that was the system at the computer museum from the EE
> Dept after it died in approx ‘75  I was there the disk crashed.

I don't think it was 19" --- the DF32 was mounted in a 19" rack, yes.
But the platter was in an enclosure which was distinctly smaller than
the overall width of the DF32, and the platter was smaller still.  See
the pictures here[1] and here[2].

[1] https://www.pdp8.net/dfds32/dfds32.shtml 
[2] https://www.pdp8.net/dfds32/pics/df32diskorig.shtml?small

I once physically held the DF32 platter in my hands.  My dad and I
pulled it out of the enclosure, wiped it down with alcohol, looked at
both sides of the platter to see which was less scratched up, and put
the "better" side face down on top of the fixed heads, and then
screwed the platter back into place.  And it worked, afterwards, too!
You can't do that with today's HDD's!  :-)

> > Later PDP-8's would run more a sophisticated OS, such as OS/8, which
> > had a "Concise Command Language" (CCL) that was designed to be similar
> > to the TOPS-10 system running on the PDP-10.  OS/8 was a single-user
> > system, though; no time-sharing!
> 
> Be careful grasshopper.   TSS/8 is available on those web sites although I
> admit it does not run on my PiDP-8 and I have not figured why (Something is
> corrupt and I have not spent the time or energy to chase it).  Anyway,
> TSS/8. Supported 4-8 ASR-33 terminals, each had 4K words as you described
> before.  There was assembler, basic, focal, Fortran-IV and an Algol circa
> 1965 extensions.

I had forgotten about TSS/8.  OS/8 was indeed only single-user,
although apparently there was a multi-user BASIC interpreter which was
available as an option.  Our PDP-8/i only had 8k of core memory so
while in theory it was possible (barely) to run OS/8, we never did.
My knowledge of OS/8 was only from the manuals.  (And indeed, how I
first learned binary arithmetic, and programming in general, was via
Digital's "Introduction to Programming" book[1], which I inhaled
when I was maybe seven or eight.)

[1] https://web.archive.org/web/20051220132023/http://www.bitsavers.org:80/pdf/dec/pdp8/IntroToProgramming1969.pdf

TSS/8 was so far beyond the capabilities of the PDP-8 in my father's
lab that I never spent much time learning about it.  I think some of
the DECUS books we had referenced it, so I knew of its existence, but
not much more than that.

> RIght this is really the model for RT11 which would begat CP/M and last
> DOS-86 (aka PC-DOS, later renamed MS-DOS).

We had a PDP-11 at my computer lab in high school.  It was actually
running TSX-11 (the time-sharing extension of RT-11), so it could
support a dozen or so virtual instances of RT-11, where we learned
PDP-11 assembler in the advanced comp-sci class.

I remember two fun things about the TSX-11; the first was that when
you logged out of TSX-11, it would print the time used on the console,
and that was being printed by the underlying RT-11 --- so if you typed
control-S right at that point, it would lock up the whole system, and
all of the other users would be dead in the water.

The other fun thing was if you could get physical access to the
PDP-11, and brought the secondary disk off-line, and then forced a
reboot, RT-11 wouldn't be able to bring the TSX-11 system up fully,
and so the LA36 console would drop to a RT-11 command line prompt
without asking for a password.  This would allow you to run the
account editing program to set up a new privileged TSX-11 account.
(Basically, the equivalent of editing /etc/passwd and adding another
account with uid == 0.)

My knowledge of these facts was, of course, purely hypothetical.  :-)

In any case, that was why the first Linux FTP site in North America
was named "tsx-11.mit.edu"; it was a Vax VS3800 running Ultrix that
was sitting in my office when I was working at MIT as a full-time
staff member.  At first, tsx-11 was my personal workstation, but over
time it became a dedicated full-time server and migrated to larger and
more powerful machines; first a Dec Alpha running OSF/1, and later on,
a more powerful Intel server running Linux.  Shortly afterwards, I
started working at VA Linux Systems, and while tsx-11 was operating
for a while after that, after a while we shut it down since the
hardware was getting old and I no longer had access to the machine
room in MIT Building E40 where it lived.

	   	     	      	    	    - Ted



* [COFF] Other OSes?
  2018-07-09 14:39                 ` [COFF] Other OSes? tytso
@ 2018-07-09 14:46                   ` clemc
  0 siblings, 0 replies; 61+ messages in thread
From: clemc @ 2018-07-09 14:46 UTC (permalink / raw)



You're right: it was called 19” technology, and the physical platter was closer to 17 or 18 inches; we should ask Dan to measure it

Sent from my iPad

> On Jul 9, 2018, at 10:39 AM, Theodore Y. Ts'o <tytso at mit.edu> wrote:
> 
>> On Mon, Jul 09, 2018 at 08:52:00AM -0400, Clem Cole wrote:
>> 
>> rIght and the RIM loaded was printed on front panel of the console.  btw
>> the disk was 19” in diamete.  Danny Klein has the original disk platter
>> from the one of the original PDP-8 - before marriage it used to hang in his
>> living room. FYI that was the system at the computer museum from the EE
>> Dept after it died in approx ‘75  I was there the disk crashed.
> 
> I don't think it was 19" --- the DF32 was mounted in a 19" rack, yes.
> But the platter was in an enclosure which was distinctly smaller than
> the overall width of the DF32, and the platter was smaller still.  See
> the pictures here[1] and here[2].
> 
> [1] https://www.pdp8.net/dfds32/dfds32.shtml 
> [2] https://www.pdp8.net/dfds32/pics/df32diskorig.shtml?small
> 
> I once physically held the DF32 platter in my hands.  My dad and I
> pulled it out of the enclosure, wiped it down with alcohol, looked at
> both sides of the platter to see which was less scratched up, and put
> the "better" side face down on top of the fixed heads, and then
> screwed the platter back into place.  And it worked, afterwards, too!
> You can't do that with today's HDD's!  :-)
> 
>>> Later PDP-8's would run more a sophisticated OS, such as OS/8, which
>>> had a "Concise Command Language" (CCL) that was designed to be similar
>>> to the TOPS-10 system running on the PDP-10.  OS/8 was a single-user
>>> system, though; no time-sharing!
>> 
>> Be careful grasshopper.   TSS/8 is available on those web sites although I
>> admit it does not run on my PiDP-8 and I have not figured why (Something is
>> corrupt and I have not spent the time or energy to chase it).  Anyway,
>> TSS/8. Supported 4-8 ASR-33 terminals, each had 4K words as you described
>> before.  There was assembler, basic, focal, Fortran-IV and an Algol circa
>> 1965 extensions.
> 
> I had forgotten about TSS/8.  OS/8 was indeed only single-user,
> although apparently there was a multi-user BASIC interpreter which was
> available as an option.  Our PDP-8/i only had 8k of core memory so
> while in theory it was possible (barely) to run OS/8, we never did.
> My knowledge of OS/8 was only from the manuals.  (And indeed, how I
> first learned binary arithmetic, and programming in general, was via
> Digital's "Introduction to Programming" book[1], which I inhaled
> when I was maybe seven or eight.)
> 
> [1] https://web.archive.org/web/20051220132023/http://www.bitsavers.org:80/pdf/dec/pdp8/IntroToProgramming1969.pdf
> 
> TSS/8 was so far beyond the capabilities of the PDP-8 in my father's
> lab that I never spent much time learning about it.  I think some of
> the DECUS books we had referenced it, so I knew of its existence, but
> not much more than that.
> 
>> RIght this is really the model for RT11 which would begat CP/M and last
>> DOS-86 (aka PC-DOS, later renamed MS-DOS).
> 
> We had a PDP-11 at my computer lab in high school.  It was actually
> running TSX-11 (the time-sharing extension of RT-11), so it could
> support a dozen or so virtual instances of RT-11, where we learned
> PDP-11 assembler in the advanced comp-sci class.
> 
> I remember two fun things about the TSX-11; the first was that when
> you logged out of TSX-11, it would print the time used on the console,
> and that was being printed by the underlying RT-11 --- so if you typed
> control-S right at that point, it would lock up the whole system, and
> all of the other users would be dead in the water.
> 
> The other fun thing was if you could get physical access to the
> PDP-11, and brought the secondary disk off-line, and then forced a
> reboot, RT-11 wouldn't be able to bring the TSX-11 system up fully,
> and so the LA36 console would drop to a RT-11 command line prompt
> without asking for a password.  This would allow you to run the
> account editing program to set up a new privileged TSX-11 account.
> (Basically, the equivalent of editing /etc/passwd and adding another
> account with uid == 0.)
> 
> My knowledge of these facts was, of course, purely hypothetical.  :-)
> 
> In any case, that was why the first Linux FTP site in North America
> was named "tsx-11.mit.edu"; it was a Vax VS3800 running Ultrix that
> was sitting in my office when I was working at MIT as a full-time
> staff member.  At first, tsx-11 was my personal workstation, but over
> time it became a dedicated full-time server and migrated to larger and
> more powerful machines; first a Dec Alpha running OSF/1, and later on,
> a more powerful Intel server running Linux.  Shortly afterwards, I
> started working at VA Linux Systems, and while tsx-11 was operating
> for a while after that, after a while we shut it down since the
> hardware was getting old and I no longer had access to the machine
> room in MIT Building E40 where it lived.
> 
>                                          - Ted



* [COFF] Other OSes?
  2018-07-09  2:44     ` crossd
@ 2018-07-10  5:30       ` bakul
  2018-07-16 14:49         ` crossd
  0 siblings, 1 reply; 61+ messages in thread
From: bakul @ 2018-07-10  5:30 UTC (permalink / raw)


On Sun, 08 Jul 2018 22:44:23 -0400 Dan Cross <crossd at gmail.com> wrote:
>
> On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger <perry at piermont.com> wrote:
>
> > [snip]
> > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah <bakul at bitblocks.com>
> > wrote:
> > > - Capabilities (a number of OSes implemented them -- See Hank
> > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/
> >
> > A fascinating resurrection of capabilities in a kind of pleasant
> > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for
> > FreeBSD and Linux. I wish it was more widely known about and adopted.
> >
> > > - Namespaces (plan9)
> >
> > A good choice I think.
>
>
> Interestingly, it was in talking with Ben Laurie about a potential security
> model for Akaros that I realized the correspondence between Plan9-style
> namespaces and capabilities.

I tend to think they are orthogonal.  Namespaces map names to
objects -- they control what objects you can see.
Capabilities control what you can do with such an object. Cap
operations such as revoke, grant, and attenuating rights
(subsetting the allowed ops) don't translate to namespaces.

A cap is much like a file descriptor but it doesn't have to be
limited to just open files or directories.

As an example, a client may invoke "read(fd, buffer, count)"
or "write(fd, buffer, count)". This call may be serviced by a
remote file server. Here "buffer" would be a cap on a memory
range within the client process -- we don't want the server to
have any more access to the client's memory. When the fileserver
is ready to read or write, it has to securely arrange data
transfer to/from this memory range[1].

You can even think of a cap as a "promise" -- you can perform
certain operations in future as opposed to right now. [Rather
than read from memory and send the data along with the write()
call, the server can use the buffer cap in future. The file
server in turn can pass on the buffer cap to a storage server,
thereby reducing latency and saving a copy or two.]
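To make the buffer-cap idea concrete, here's a toy sketch (the class and its API are invented for illustration, not any real cap system's interface): a handle that permits reads and writes only within a granted range of the client's memory.

```python
class BufferCap:
    """Toy capability on a memory range: the holder can read/write
    only within [base, base+length) of the underlying store."""

    def __init__(self, store, base, length):
        self._store = store          # e.g. a bytearray standing in for client memory
        self._base = base
        self._length = length

    def _check(self, off, n):
        # Every operation is bounds-checked against the granted range.
        if off < 0 or n < 0 or off + n > self._length:
            raise PermissionError("access outside granted range")

    def read(self, off, n):
        self._check(off, n)
        return bytes(self._store[self._base + off : self._base + off + n])

    def write(self, off, data):
        self._check(off, len(data))
        self._store[self._base + off : self._base + off + len(data)] = data
```

A "server" handed a BufferCap can fill the client's buffer without seeing any other part of the client's memory, and the cap itself can be passed along (e.g. to a storage server) like the promise described above.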

>                              Since Akaros had imported a lot of plan9
> namespace code, we concluded that we already had the building blocks for a
> capability-style security model with minimal additional work. I
> subsequently concluded that Capsicum wasn't strong enough to be complete.

The point of my original comment was that if capabilities were
integral to a system (as opposed to being an add-on like
Capsicum) a much simpler Unix might've emerged. 

[1] In the old days we used copyin/copyout for this sort of
data transfer between a user process address space and the
kernel. This is analogous.



* [COFF] Other OSes?
  2018-07-09  2:51     ` crossd
@ 2018-07-10  5:41       ` bakul
  0 siblings, 0 replies; 61+ messages in thread
From: bakul @ 2018-07-10  5:41 UTC (permalink / raw)


On Sun, 08 Jul 2018 22:51:00 -0400 Dan Cross <crossd at gmail.com> wrote:
>
> On Fri, Jul 6, 2018 at 1:43 AM Bakul Shah <bakul at bitblocks.com> wrote:
>
> > [snip some very interesting and insightful comments]
> > Mill ideas are very much worth exploring.  It will be possible
> > to build highly secure systems with it -- if it ever gets
> > sufficiently funded and built!  IMHO layers of mapping as with
> > virtualization/containerization are not really needed for
> > better security or isolation.
>
> Sure, with emphasis on that "if it ever gets sufficiently funded and
> built!" part. :-) It sounds cool, but what to do on extant hardware?
> Similarly with CHERI: they change nearly everything (including the
> hardware).

There is that!

Mill made me realize that per-process virtual address spaces can be
thrown out *without* compromising security.  This can be a
win if you are building an N-core processor (for some large
N). Extant processor architectures are not going to make
efficient use of the available gates for large N, and
multicore efforts such as Tilera don't seem to do anything about
security. This just seems like something worth experimenting
with.

> > 2. Is mmap() *really* the best we can do for mapping arbitrary resources
> > > into an address space?
> >
> > I think this is fine.  Even remote objects mmapping should
> > work!
> >
>
> Sure, but is it the *best* we can do? Subjectively, the interface is pretty
> ugly, and we're forced into a multi-level store. Maybe that's OK; it sure
> seems like we haven't come up with anything better. But I wonder whether
> that's because we've found some local maxima in our pursuit of
> functionality vs cost, or because we're so stuck in the model of
> multi-level stores and mapping objects into address spaces that we can't
> see beyond it. And it sure would be nice if the ergonomics of the
> programming interface were better.

I was using mmap as a generic term.  See my previous message
for an example -- read/write(fd, buffer, count). Here buffer
is a cap that can be used to map remote data into local addr
space.



* [COFF] Other OSes?
  2018-07-10  5:30       ` bakul
@ 2018-07-16 14:49         ` crossd
  2018-07-16 16:59           ` [COFF] Capabilities (was " bakul
  0 siblings, 1 reply; 61+ messages in thread
From: crossd @ 2018-07-16 14:49 UTC (permalink / raw)


On Tue, Jul 10, 2018 at 1:30 AM Bakul Shah <bakul at bitblocks.com> wrote:

> On Sun, 08 Jul 2018 22:44:23 -0400 Dan Cross <crossd at gmail.com> wrote:
> Dan Cross writes:
> > On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger <perry at piermont.com>
> wrote:
> >
> > > [snip]
> > > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah <bakul at bitblocks.com>
> > > wrote:
> > > > - Capabilities (a number of OSes implemented them -- See Hank
> > > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/
> > >
> > > A fascinating resurrection of capabilities in a kind of pleasant
> > > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for
> > > FreeBSD and Linux. I wish it was more widely known about and adopted.
> > >
> > > > - Namespaces (plan9)
> > >
> > > A good choice I think.
> >
> > Interestingly, it was in talking with Ben Laurie about a potential
> security
> > model for Akaros that I realized the correspondence between Plan9-style
> > namespaces and capabilities.
>
> I tend to think they are orthogonal.  Namespaces map names to
> objects -- they control what objects you can see.
>

In many ways, capabilities do the same thing: by not being able to name a
resource, I cannot access it.

_A_ way to think of plan9-style namespaces is as objects, with the names
of files exposed by a particular filesystem being operations on those
objects (I'm paraphrasing Russ Cox here, I think). In the plan9 world, most
useful resources are implemented as filesystems. In that sense, a
particular namespace not only presents a program with the set of resources
it can access but also defines what it can do with those resources.

Capabilities control what you can do with such an object. Cap
> operations such as revoke, grant, attenuating rights
> (subsetting the allowed ops) don't translate to namespaces.
>

I disagree. By way of example, I again reiterate the question I've asked
several times now: there's a capability in e.g. Capsicum to allow a program
to invoke the connect(2) system call: how does the capability system allow
me to control the 5 tuple one might connect(2) to? I gave an example where,
in the namespace world, I can interpose a proxy namespace that emulates the
networking filesystem and restrict what I can do with the network.

A cap is much like a file descriptor but it doesn't have to be
> limited to just open files or directories.
>
> As an example, a client may invoke "read(fd, buffer, count)"
> or "write(fd, buffer, count)". This call may be serviced by a
> remote file server. Here "buffer" would be a cap on a memory
> range within the client process -- we don't want the server to
> have any more access to client's memory. When the fileserver
> is ready to read or write, it has to securely arrange data
> transfer to/from this memory range[1].
>

So in other words, "buffer" is a name of an object and by being able to
name that object, I can do something useful with it, subject to the
limitations imposed by the object itself (e.g., `read(fd, buffer, count);`
will fault if `buffer` points to read-only memory).

You can even think of a cap as a "promise" -- you can perform
> certain operations in future as opposed to right now. [Rather
> than read from memory and send the data along with the write()
> call, the server can use the buffer cap in future. The file
> server in turn can pass on the buffer cap to a storage server,
> there by reducing latency and saving a copy or two]
>

Similarly to the way that the existence of a file name in a process's
namespace is in some ways a promise to be able to access some resource in
the future.

>                              Since Akaros had imported a lot of plan9
> > namespace code, we concluded that we already had the building blocks for
> a
> > capability-style security model with minimal additional work. I
> > subsequently concluded that Capsicum wasn't strong enough to be complete.
>
> The point of my original comment was that if capabilities were
> integral to a system (as opposed to being an add-on like
> Capsicum) a much simpler Unix might've emerged.
>
> [1] In the old days we used copyin/copyout for this sort of
> data transfer between a user process address space and the
> kernel. This is analogous.
>

It's an aside, but it's astonishing to see how that's been bastardized by
the use of ioctl() for such things in the Linux world.

        - Dan C.



* [COFF] Capabilities (was Re:  Other OSes?
  2018-07-16 14:49         ` crossd
@ 2018-07-16 16:59           ` bakul
  0 siblings, 0 replies; 61+ messages in thread
From: bakul @ 2018-07-16 16:59 UTC (permalink / raw)


On Mon, 16 Jul 2018 10:49:52 -0400 Dan Cross <crossd at gmail.com> wrote:
>
> On Tue, Jul 10, 2018 at 1:30 AM Bakul Shah <bakul at bitblocks.com> wrote:
>
> > I tend to think they are orthogonal.  Namespaces map names to
> > objects -- they control what objects you can see.
> >
>
> In many ways, capabilities do the same thing: by not being able to name a
> resource, I cannot access it.
>
> _A_ way to think of plan9 style namespaces are as objects, with the names
> of files exposed by a particular filesystem being operations on those
> objects (I'm paraphrasing Russ Cox here, I think). In the plan9 world, most
> useful resources are implemented as filesystems. In that sense, not only
> does a particular namespace present a program with the set of resources it
> can access, but it also defines what I can do with those resources.
>
> Capabilities control what you can do with such an object. Cap
> > operations such as revoke, grant, attenuating rights
> > (subsetting the allowed ops) don't translate to namespaces.
> >
>
> I disagree. By way of example, I again reiterate the question I've asked
> several times now: there's a capability in e.g. Capsicum to allow a program
> to invoke the connect(2) system call: how does the capability system allow
> me to control the 5 tuple one might connect(2) to? I gave an example where,
> in the namespace world, I can interpose a proxy namespace that emulates the
> networking filesystem and restrict what I can do with the network.

You'd do something like "networkStackCap.connect(5-tuple)" where
"networkStackCap" is a capability to a network-stack service (or
object) you were given. It could be a proxy server or a NAT server
that rewrites things, or a remote handle on a service on another
node. A client starts with some given caps. It can gain new caps
only through operations on its existing caps. Whether you represent
the 5-tuple as a path string or a tuple of 5 numbers or something else
is up to how the network stack expects it.
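Something like this toy sketch (only "networkStackCap.connect" comes from the discussion above; the class names and the filtering policy are invented for illustration): the cap the client holds wraps the real stack and forwards connect() only for 5-tuples its policy allows.

```python
class NetStack:
    """Stand-in for the real network-stack service."""
    def connect(self, five_tuple):
        return f"connected {five_tuple}"

class AttenuatedNetCap:
    """Toy cap on a network stack: forwards connect() only for
    5-tuples the policy permits. The holder never sees the
    underlying stack object directly."""
    def __init__(self, stack, policy):
        self._stack = stack
        self._policy = policy        # predicate on the 5-tuple
    def connect(self, five_tuple):
        if not self._policy(five_tuple):
            raise PermissionError("connect not permitted by this cap")
        return self._stack.connect(five_tuple)

# Grant a cap that can only reach destination port 443;
# tuple layout assumed: (proto, saddr, sport, daddr, dport).
https_only = AttenuatedNetCap(NetStack(), lambda t: t[4] == 443)
```

The interposition Dan describes with a proxy namespace is done here by handing the client the attenuated cap instead of the real stack.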

> A cap is much like a file descriptor but it doesn't have to be
> > limited to just open files or directories.
> >
> > As an example, a client may invoke "read(fd, buffer, count)"
> > or "write(fd, buffer, count)". This call may be serviced by a
> > remote file server. Here "buffer" would be a cap on a memory
> > range within the client process -- we don't want the server to
> > have any more access to client's memory. When the fileserver
> > is ready to read or write, it has to securely arrange data
> > transfer to/from this memory range[1].
> >
>
> So in other words, "buffer" is a name of an object and by being able to
> name that object, I can do something useful with it, subject to the
> limitations imposed by the object itself (e.g., `read(fd, buffer, count);`
> will fault if `buffer` points to read-only memory).

Right, but then in order to access the contents of this named
buffer object you need to pass it another named buffer
object.... Where is the bottom turtle? In the Unix/plan9 world a
buffer is strictly local and you have to play games (such as
copyin/copyout or "meltdown"-friendly mapping).

One way to compare them is this: imagine in a game of
adventure you are exploring a place with many passages and
rooms. In the cap world some of these rooms have a lock and
you need the right key to access them (and in turn they may
lead to other locked or open rooms). You can only access those
rooms for which you were given keys when you started or the
keys you found during your exploration.

In the plan9 world you can't even see the door of a room if
you are not allowed to access it. You need to read some magic
scroll (mount) and new doors would magically appear! And there
are some global objects that anyone can access (e.g. #e,
#c, #k, #M, #s etc.). In contrast cap clients are inherently
sandboxed. They can't tell if it is Live or it is Memorex. 

plan9 / unix filesystems control access to a collection of
objects. You still need rwxrwxrwx mode bits, which are not
fine-grained enough -- the same for each group of users.

Caps can control access to individual objects, and you can
subset this access (e.g. the equivalent of a valet key that
doesn't allow access to the glove box or trunk, or a trunk key
that doesn't allow driving the car). You can even revoke access
(e.g. change the locks). None of these map to namespaces
without adding much more complication.
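A toy model of the valet-key idea (everything here is invented for illustration, not any particular cap system): a cap carries a rights set, can be attenuated to a subset, and revoking the root "changes the locks" for every cap derived from it.

```python
class Cap:
    """Toy capability: a rights set plus a revocation flag shared
    by all caps derived from the same root."""
    def __init__(self, obj, rights, _revoked=None):
        self._obj = obj
        self._rights = frozenset(rights)
        # Shared mutable flag: revoking the root kills all derived caps
        # (in this simplified model, every cap sharing the root's flag).
        self._revoked = _revoked if _revoked is not None else [False]

    def attenuate(self, rights):
        # Valet key: hand out a subset of your own rights, never more.
        if not set(rights) <= self._rights:
            raise PermissionError("cannot amplify rights")
        return Cap(self._obj, rights, self._revoked)

    def revoke(self):
        # Change the locks.
        self._revoked[0] = True

    def invoke(self, right):
        if self._revoked[0]:
            raise PermissionError("cap revoked")
        if right not in self._rights:
            raise PermissionError(f"no {right!r} right")
        return f"{right} on {self._obj}"
```

Expressing the same three operations (grant, attenuate, revoke) through a namespace would mean synthesizing and tearing down per-client views of the tree, which is the extra complication referred to above.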

Even plan9's namespaces are rather expensive in practice. Maybe
because they evolved from unix, but for arbitrary objects,
things like owner, group, access and modification times, etc.
don't really matter.

> > [1] In the old days we used copyin/copyout for this sort of
> > data transfer between a user process address space and the
> > kernel. This is analogous.
> >
>
> It's an aside, but it's astonishing to see how that's been bastardized by
> the use of ioctl() for such things in the Linux world.

The unix filesystem abstraction is very good, but it doesn't
cover some uses, hence it has become a leaky abstraction. In the
plan9 world you have ctl files, but the commands they accept are
still arbitrary (specific to the object).

The great invention of unix was a set of a few abstractions
that served extremely well for a majority of tasks. These can
be used with caps.



end of thread, other threads:[~2018-07-16 16:59 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-05  5:56 [COFF] Other OSes? wkt
2018-07-05  6:29 ` spedraja
2018-07-05  6:40 ` bakul
2018-07-05 15:23   ` clemc
2018-07-05 20:49     ` scj
2018-07-05 21:25       ` david
2018-07-06 15:42         ` gtaylor
2018-07-05 22:38       ` ralph
2018-07-05 23:11       ` bakul
2018-07-06  0:06         ` lm
2018-07-06 15:49           ` gtaylor
2018-07-06  0:52       ` tytso
2018-07-06  5:59         ` ralph
2018-07-06 15:59           ` gtaylor
2018-07-06 16:10             ` ralph
2018-07-06 16:47               ` gtaylor
2018-07-06 15:57         ` gtaylor
2018-07-06 15:38       ` gtaylor
2018-07-09  1:56         ` tytso
2018-07-09  3:25           ` gtaylor
2018-07-09  3:35             ` crossd
2018-07-09  3:43               ` gtaylor
2018-07-09  3:52                 ` imp
2018-07-09 11:32                   ` perry
2018-07-09 11:50                     ` perry
2018-07-09 11:34                 ` crossd
2018-07-09  5:23             ` tytso
2018-07-09 12:52               ` clemc
2018-07-09 13:06                 ` [COFF] PiDP Obsolesces Guaranteed clemc
2018-07-09 14:39                 ` [COFF] Other OSes? tytso
2018-07-09 14:46                   ` clemc
2018-07-09 11:24           ` perry
2018-07-05 22:51   ` ewayte
2018-07-08 20:31   ` perry
2018-07-08 20:53     ` perry
2018-07-09  2:44     ` crossd
2018-07-10  5:30       ` bakul
2018-07-16 14:49         ` crossd
2018-07-16 16:59           ` [COFF] Capabilities (was " bakul
2018-07-06  0:55 ` [COFF] " crossd
2018-07-06  5:42   ` bakul
2018-07-09  2:51     ` crossd
2018-07-10  5:41       ` bakul
2018-07-06  4:04 ` grog
2018-07-06 16:10   ` gtaylor
2018-07-06 18:27     ` [COFF] Editor Scripts scj
2018-07-06 19:04       ` gtaylor
2018-07-08 20:50 ` [COFF] Other OSes? perry
2018-07-08 23:27   ` bakul
2018-07-09  0:00     ` grog
2018-07-09  0:13       ` perry
2018-07-09  0:05     ` crossd
2018-07-09  0:56       ` lm
2018-07-09  2:23         ` crossd
2018-07-09  0:11     ` perry
2018-07-09  0:19       ` crossd
2018-07-09  2:00         ` bakul
2018-07-09  3:02           ` [COFF] Origination of awful security design [COFF, COFF] bill
2018-07-09 13:10           ` [COFF] Other OSes? david
2018-07-09 13:17           ` perry
2018-07-09 13:13         ` perry

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).