The Unix Heritage Society mailing list
From: Dan Cross <crossd@gmail.com>
To: "Greg 'groggy' Lehey" <grog@lemis.com>
Cc: The Eunuchs Hysterical Society <tuhs@tuhs.org>
Subject: Re: [TUHS] Command line options and complexity
Date: Wed, 11 Mar 2020 19:14:32 -0400	[thread overview]
Message-ID: <CAEoi9W5N8mQBfNgmugVdHq9jH7k58Az49TuR3wcO2nB-nKGrnQ@mail.gmail.com> (raw)
In-Reply-To: <20200311225638.GG89512@eureka.lemis.com>


On Wed, Mar 11, 2020 at 6:57 PM Greg 'groggy' Lehey <grog@lemis.com> wrote:

> On Wednesday, 11 March 2020 at 14:18:08 +1100, Dave Horsfall wrote:
> >
> > The "ls" command for example really needs an option-ectomy; I find that I
> > don't really care about the exact number of bytes there are in a file as
> > the nearest KiB or MiB (or even GiB) is usually good enough, so I'd be
> > happy if "-h" was the default with some way to turn it off (yes, I know
> > that it's occasionally useful to add them all up in a column, but that
> > won't tell you how many media blocks are required).
>
> A good example.  But you're not removing options, you're just
> redefining them.  In fact I find the -h option particularly emetic, so
> a better choice in removing options would be to remove -h and use a
> filter to mutilate the sizes:
>
>   $ ls -l | humanize
>
> But that's a pain, isn't it?


I don't know; that's subjective.


> That's why there's a -h option for
> people who like it.


That's incomplete, in that it implies an option is the only way to reduce
the perceived pain, which isn't the case. (Note that I'm not saying you
intended that interpretation, but it's a reasonable one to draw.)

An interesting counterpoint to this argument is how columnized "ls" is
handled under Plan 9: there is no `-C` option to `ls` there; instead,
there's a general-purpose `mc` filter that figures out the size of the
window it's running in, reads its input, decides how many columns the input
will fit into, and emits it columnized. But yes, it would be a pain to type
`ls | mc` every time one wanted columnized `ls` output, so this is wrapped
up into a shell script called `lc`. Note that this lets you do stuff like
`lc -l` and see multi-column long listings if the window is wide enough.
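For illustration, an approximation of such a filter can be written with
standard Unix tools. This is only a sketch: the real mc asks the window
system for the window's width, which a plain pipe can't do, so this
version falls back on the COLUMNS environment variable (the function name
`mc` here just mirrors the Plan 9 command):

```shell
# Sketch of an mc-like columnizing filter.  Unlike Plan 9's mc,
# which asks the window system for its width, this uses $COLUMNS
# (default 80).  Input is laid out column-major, like ls -C.
mc() {
    awk -v width="${COLUMNS:-80}" '
        { line[NR] = $0; if (length($0) > max) max = length($0) }
        END {
            cols = int(width / (max + 2)); if (cols < 1) cols = 1
            rows = int((NR + cols - 1) / cols)
            fmt = "%-" (max + 2) "s"
            for (r = 1; r <= rows; r++) {
                out = ""
                for (c = 0; c < cols; c++) {
                    i = c * rows + r
                    if (i <= NR) out = out sprintf(fmt, line[i])
                }
                sub(/ +$/, "", out); print out
            }
        }'
}
```

With that in place, `ls | mc` behaves roughly like `ls -C`, and the same
filter works unchanged on any other line-oriented output.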

I got so used to this from Plan 9 that I keep an approximation in
$HOME/bin/lc: `exec ls -ACF "$@"`.

For the `humanize` thing, I don't see why one couldn't have an `lh` command
that generated "human-friendly long output from ls."
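To make that concrete, here is one possible sketch of such a pair. The
names `humanize` and `lh` come from this thread, not from any existing
tool; the filter assumes the usual `ls -l` layout with the size in field
five, and rewriting that field collapses ls's column padding (a real tool
would re-pad):

```shell
# Hypothetical humanize filter: rewrite the size field (field 5)
# of "ls -l"-style output into K/M/G form.  Assumes POSIX ls -l
# layout; reassigning $5 collapses ls's column alignment.
humanize() {
    awk '
        NF >= 5 && $5 ~ /^[0-9]+$/ {
            split("B K M G T", unit, " ")
            size = $5; i = 1
            while (size >= 1024 && i < 5) { size /= 1024; i++ }
            $5 = sprintf((i == 1) ? "%d%s" : "%.1f%s", size, unit[i])
        }
        { print }
    '
}

# ...and the wrapper, in the same spirit as lc:
lh() { ls -l "$@" | humanize; }
```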


> Note that you can't do it the other way round:
> you can't get the exact size from -h output.
>

That's true, but now the logic is specialized to ls, and not applicable to
anything else (e.g., du? df? wc, perhaps?). Similarly with `-,`. It is not
general purpose, which is unfortunate.
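As a sketch of what the general-purpose alternative could look like, a
digit-grouping filter might insert separators into any number on its
input, so it would work equally well on ls, du, df, or wc output. The
name and behavior here are illustrative, not an existing tool, and it
hardcodes a comma where `-,` would consult the locale:

```shell
# Hypothetical general-purpose digit-grouping filter: insert
# commas into every run of digits on input, grouping from the
# right.  Unlike ls -,, this works on any command's output.
groupdigits() {
    awk '
        function group(n,    s) {
            s = ""
            while (length(n) > 3) {
                s = "," substr(n, length(n) - 2) s
                n = substr(n, 1, length(n) - 3)
            }
            return n s
        }
        {
            out = ""; rest = $0
            while (match(rest, /[0-9]+/)) {
                out = out substr(rest, 1, RSTART - 1) \
                      group(substr(rest, RSTART, RLENGTH))
                rest = substr(rest, RSTART + RLENGTH)
            }
            print out rest
        }
    '
}
```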

Granted, combining these things would be a little challenging, but is it
likely that one would want `ls -l,h`? Optimize for the common case, etc....

> And then there's the question why you don't like the standard output.
> Because the number strings are too long and difficult to read, maybe?
> That's the rationale for the -, option.
>
> > Quickly now, without looking: which option shows unprintable
> > characters in a filename?  Unless you use it regularly (in which
> > case you have real problems) you would have to look it up; I find
> > that "ls ... | od -bc" to be quicker, especially on filenames with
> > trailing blanks etc (which "-B" won't show).
>
> This is arguably a bug in the -B option.  I certainly don't think the
> pipe notation is quicker.  But it's nice to have both alternatives.


By default, ls on Plan 9 would quote filenames containing characters that
were special to the shell (there wasn't really a concept of "non-printable
characters" in the Unix/TTY sense); this could be disabled by specifying
the `-Q` option.
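An approximation of that quoting behavior as a Unix-side filter might
look like the sketch below. The character class is a rough guess at what
counts as "special", not Plan 9's actual rule, and the quoting style
(wrap in single quotes, double any embedded quote) mimics rc(1):

```shell
# Sketch of Plan 9-style name quoting as a filter: wrap names
# containing shell-special characters in rc(1)-style quotes,
# doubling any embedded quote.  The character class is a rough
# guess, not Plan 9 ls's actual rule.
p9quote() {
    awk -v q="'" '
        $0 ~ /[ \t!#$&()*;<=>?@`"|{}~^]/ || index($0, q) {
            gsub(q, q q)
            print q $0 q
            next
        }
        { print }
    '
}
```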

        - Dan C.


