* the splits
@ 2024-05-21 17:42 Ray Andrews
2024-05-22 1:48 ` Mark J. Reed
0 siblings, 1 reply; 12+ messages in thread
From: Ray Andrews @ 2024-05-21 17:42 UTC (permalink / raw)
To: Zsh Users
A function of mine might need to unmount something. If there's a problem,
I try to be helpful:
umount -v "$mountpoint" ||
{
    # Helpful diagnostics if partition won't unmount:
    echo "\nCan't unmount $mountpoint. Is a terminal logged on? Or is it one of these programs?\n(Please wait or press '^C' to quit.)"
    abc=$(lsof | grep $mountpoint)
    abc=( ${(f)abc} )
    for def in $abc[@]; do ghi=( ${=def} ); print -- "$ghi[1]\t$ghi[-1]\n"; done
    return
}
Typical run:
% mnt ,U sda
Unmounting partitions ...
umount: /mnt/sda/1: target is busy.
Can't unmount /mnt/sda/1. Is a terminal logged on? Or is it one of
these programs?
(Please wait or press '^C' to quit.)
zsh /mnt/sda/1/EFI/BOOT
geany /mnt/sda/1/EFI/BOOT
geany /mnt/sda/1/EFI/BOOT
geany /mnt/sda/1/EFI/BOOT
----------------------------------------------------
... works fine but I'll bet:
abc=$(lsof | grep $mountpoint)
abc=( ${(f)abc} )
for def in $abc[@]; do ghi=( ${=def} ); print -- "$ghi[1]\t$ghi[-1]\n"; done
... is belabored. Can that be streamlined? As always my splitting is
a problem. I need to process line by line, but then word by word so as
to grab just the first and last words from 'lsof' output. I'll bet
Roman can do all of the above in 20 characters.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: the splits
2024-05-21 17:42 the splits Ray Andrews
@ 2024-05-22 1:48 ` Mark J. Reed
2024-05-22 2:21 ` Ray Andrews
0 siblings, 1 reply; 12+ messages in thread
From: Mark J. Reed @ 2024-05-22 1:48 UTC (permalink / raw)
To: Ray Andrews; +Cc: Zsh Users
Running `lsof | grep` seems a bit silly. Can't you just do
`lsof $mountpoint`?
When I want to select columns I usually reach for awk:
    sudo lsof $mountpoint | awk '{print $1, $NF}'
No need to read all the text into a variable when you can just send it
straight to the screen.
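On a couple of fake lsof-style lines (sample data, not real lsof output), that column pick behaves like this:

```shell
# Two fake lsof-style records: first field is the command name, last
# field is the open path. awk prints only fields 1 and NF (the last).
printf 'zsh 1234 ray /mnt/sda/1\ngeany 5678 ray /mnt/sda/1\n' |
    awk '{print $1, $NF}'
```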
On Tue, May 21, 2024 at 1:43 PM Ray Andrews <rayandrews@eastlink.ca> wrote:
> [...]
--
Mark J. Reed <markjreed@gmail.com>
* Re: the splits
2024-05-22 1:48 ` Mark J. Reed
@ 2024-05-22 2:21 ` Ray Andrews
2024-05-22 5:56 ` Roman Perepelitsa
2024-05-22 5:57 ` Jim
0 siblings, 2 replies; 12+ messages in thread
From: Ray Andrews @ 2024-05-22 2:21 UTC (permalink / raw)
To: zsh-users
On 2024-05-21 18:48, Mark J. Reed wrote:
> Running *lsof* *| grep* seems a bit silly. Can't you just do *lsof
> $mountpoint*?
>
> When I want to select columns I usually reach for *awk*:
>
> *sudo lsof $mountpoint | awk '{print $1, $NF}'*
>
>
Beautiful. Nuts, I just presumed I needed to grep for that. Much faster
your way. As for awk, I don't know anything about it, but googling for
help on various issues, one sees awk coming to the rescue all the time.
I halfway learned sed, but I think I should have learned awk. One
little thing, can I have the first and last columns, but with a tab
between, or some other columnizer?:
COMMAND NAME
zsh /mnt/sda/5/boot
geany /mnt/sda/5/boot
* Re: the splits
2024-05-22 2:21 ` Ray Andrews
@ 2024-05-22 5:56 ` Roman Perepelitsa
2024-05-22 19:25 ` Ray Andrews
2024-05-22 5:57 ` Jim
1 sibling, 1 reply; 12+ messages in thread
From: Roman Perepelitsa @ 2024-05-22 5:56 UTC (permalink / raw)
To: Ray Andrews; +Cc: zsh-users
On Wed, May 22, 2024 at 4:21 AM Ray Andrews <rayandrews@eastlink.ca> wrote:
>
> On 2024-05-21 18:48, Mark J. Reed wrote:
>
> > When I want to select columns I usually reach for awk:
> >
> > sudo lsof $mountpoint | awk '{print $1, $NF}'
>
> One little thing, can I have the first and last columns, but with a
> tab between, or some other columnizer?
Tab instead of space:
awk '{print $1 "\t" $NF}'
Left-justified first column:
awk '{printf "%-16s\t%s\n", $1, $NF}'
Columnized output:
awk '{print $1, $NF}' | column -t
`column -t` is easy to use and produces nicely aligned output. The
downside is that it buffers all input, so it's not always applicable.
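To see the difference, the first two variants can be exercised on a couple of fake lsof-style records (hypothetical sample data, not from the thread):

```shell
# Two fake lsof-style records: command first, open path last.
sample='zsh 1234 ray cwd DIR 8,1 4096 /mnt/sda/1/EFI/BOOT
geany 5678 ray cwd DIR 8,1 4096 /mnt/sda/1/EFI/BOOT'

# Tab between the first and last columns.
printf '%s\n' "$sample" | awk '{print $1 "\t" $NF}'

# First column left-justified in a 16-character field, then a tab.
printf '%s\n' "$sample" | awk '{printf "%-16s\t%s\n", $1, $NF}'
```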
Roman.
* Re: the splits
2024-05-22 2:21 ` Ray Andrews
2024-05-22 5:56 ` Roman Perepelitsa
@ 2024-05-22 5:57 ` Jim
1 sibling, 0 replies; 12+ messages in thread
From: Jim @ 2024-05-22 5:57 UTC (permalink / raw)
To: Ray Andrews; +Cc: zsh-users
Ray,
On Tue, May 21, 2024 at 9:21 PM Ray Andrews <rayandrews@eastlink.ca> wrote:
>
> On 2024-05-21 18:48, Mark J. Reed wrote:
>> When I want to select columns I usually reach for awk:
>>
>> sudo lsof $mountpoint | awk '{print $1, $NF}'
>
> Beautiful, nuts' I just presumed I needed to grep for that. Much faster
> your way. As for awk, I don't know anything about it, but googling for
> help on various issues, one sees awk coming to the rescue all the time. I
> half way learned sed, but I think I should have learned awk. One little
> thing, can I have the first and last columns, but with a tab between, or
> some other columnizer?:
>
> COMMAND NAME
> zsh /mnt/sda/5/boot
> geany /mnt/sda/5/boot
>
Maybe something like this would work for you. Except for lsof it is all
shell code.
L=15  # ensure the first field reserves L columns (adjust as needed)
lsof $mountpoint |
while read -r line; do
    print -- ${(r.L.. .)${=line}[1]} ${${=line}[-1]}
done
Regards,
Jim Murphy
* Re: the splits
2024-05-22 5:56 ` Roman Perepelitsa
@ 2024-05-22 19:25 ` Ray Andrews
2024-05-22 20:41 ` Lawrence Velázquez
0 siblings, 1 reply; 12+ messages in thread
From: Ray Andrews @ 2024-05-22 19:25 UTC (permalink / raw)
To: zsh-users
On 2024-05-21 22:56, Roman Perepelitsa wrote:
>> One little thing, can I have the first and last columns, but with a
>> tab between, or some other columnizer?
> Tab instead of space:
>
> awk '{print $1 "\t" $NF}'
>
> Left-justified first column:
>
> awk '{printf "%-16s\t%s\n", $1, $NF}'
>
> Columnized output:
>
> awk '{print $1, $NF}' | column -t
>
> `column -t` is easy to use and produces nicely aligned output. The
> downside is that it buffers all input, so it's not always applicable.
>
> Roman.
Gotta learn awk! Nuts, made the wrong choice going with sed.
* Re: the splits
2024-05-22 19:25 ` Ray Andrews
@ 2024-05-22 20:41 ` Lawrence Velázquez
2024-05-22 21:36 ` Ray Andrews
0 siblings, 1 reply; 12+ messages in thread
From: Lawrence Velázquez @ 2024-05-22 20:41 UTC (permalink / raw)
To: zsh-users
On Wed, May 22, 2024, at 3:25 PM, Ray Andrews wrote:
> Gotta learn awk! Nuts, made the wrong choice going with sed.
No. Although awk is more powerful than sed, they each have their
own strengths and suitable use cases [1]. It'd be pretty goofy to
do this:
awk '/foo/ { gsub(/bar/, "baz") }; 1'
instead of this:
sed /foo/s/bar/baz/g
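On sample input the two spellings agree; only lines matching foo get the substitution, and every line is printed (the trailing `1` in the awk version is an always-true pattern whose default action prints the line):

```shell
# Replace bar with baz, but only on lines containing foo.
printf 'foo bar\nqux bar\n' | sed '/foo/s/bar/baz/g'

# The awk equivalent: gsub on matching lines, then print everything.
printf 'foo bar\nqux bar\n' | awk '/foo/ { gsub(/bar/, "baz") }; 1'
```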
You're hamstringing yourself with this "learn one tool" mindset.
Familiarity with a variety of tools helps you approach problems
from different perspectives and avoid the law of the instrument
[2][3].
[1]: https://mywiki.wooledge.org/BashProgramming/02#Strength_reduction
[2]: https://en.wikipedia.org/wiki/Law_of_the_instrument
[3]: https://eev.ee/blog/2011/04/17/architectural-fallacies/#maslows-hammer
--
vq
* Re: the splits
2024-05-22 20:41 ` Lawrence Velázquez
@ 2024-05-22 21:36 ` Ray Andrews
2024-05-23 0:04 ` Lawrence Velázquez
0 siblings, 1 reply; 12+ messages in thread
From: Ray Andrews @ 2024-05-22 21:36 UTC (permalink / raw)
To: zsh-users
On 2024-05-22 13:41, Lawrence Velázquez wrote:
> You're hamstringing yourself with this "learn one tool" mindset.
> Familiarity with a variety of tools helps you approach problems
> from different perspectives and avoid the law of the instrument
>
Of course. Yes, different tools for different jobs. What I meant was
that in terms of priority, awk might have given me more bang for the
buck as the first tool to learn. Mind, sed is simpler and that's why I
went that way. Tx. for the links. 'Strength reduction' is something I
believe in instinctively. I always want to do things in native zsh code
if possible. External tools have to justify their weight.
* Re: the splits
2024-05-22 21:36 ` Ray Andrews
@ 2024-05-23 0:04 ` Lawrence Velázquez
2024-05-23 0:30 ` Ray Andrews
0 siblings, 1 reply; 12+ messages in thread
From: Lawrence Velázquez @ 2024-05-23 0:04 UTC (permalink / raw)
To: zsh-users
On Wed, May 22, 2024, at 5:36 PM, Ray Andrews wrote:
> Of course. Yes, different tools for different jobs. What I meant was
> that in terms of priority awk might have given me more bang for the
> buck as the first tool to learn. Mind, sed is simpler and that's why I
> went that way.
Yes, it's quite common -- borderline universal -- to start with sed
and pick up awk later.
> Tx. for links. 'Strength reduction' is something I
> believe in instinctively. I always want to do things in native zsh
> code if possible. External tools have to justify their weight.
You take this way too far, in my opinion. Compared to appropriate
external tools, zsh-heavy solutions often perform poorly and are
more difficult to understand and maintain. (Compare your original
code to the awk solutions Mark and Roman offered.) Most of your
code would improve if you (judiciously) used more external utilities
and less zsh.
(For example, when you find yourself doing this:
output=$(cmd)
lines=(${(f)output})
for line in $lines
do
modify_line_and_print_it
done
you should step back and consider ''cmd | sed'' or ''cmd | awk''
instead, especially if cmd might output more than a couple of
lines.)
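Concretely, the capture-split-loop pattern and its pipeline replacement might look like this (the command and the per-line transformation are placeholders, shown in portable while-read form):

```shell
# Stand-in for the real command that produces the lines.
cmd() { printf 'alpha one\nbeta two\n'; }

# Loop version: the shell reads and reassembles every line itself.
cmd | while read -r first rest; do
    printf '%s -> %s\n' "$first" "$rest"
done

# Pipeline version: awk does the per-line work in one compiled pass.
cmd | awk '{print $1 " -> " $2}'
```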
--
vq
* Re: the splits
2024-05-23 0:04 ` Lawrence Velázquez
@ 2024-05-23 0:30 ` Ray Andrews
2024-05-23 2:31 ` Mark J. Reed
0 siblings, 1 reply; 12+ messages in thread
From: Ray Andrews @ 2024-05-23 0:30 UTC (permalink / raw)
To: zsh-users
On 2024-05-22 17:04, Lawrence Velázquez wrote:
> Yes, it's quite common -- borderline universal -- to start with sed
> and pick up awk later.
Good to know. Sometimes I think I get everything wrong.
> You take this way too far, in my opinion. Compared to appropriate
> external tools, zsh-heavy solutions often perform poorly and are
> more difficult to understand and maintain. (Compare your original
> code to the awk solutions Mark and Roman offered.) Most of your
> code would improve if you (judiciously) used more external utilities
> and less zsh.
It seems 'obvious' that internal code would be faster, but I know from
Roman's various tests over the years that it ain't necessarily so.
Besides, at the concept level, shells are intended as glue between
system commands, and all their internal abilities are add-ons. I
suppose when, as you say, one replaces a multi-line internal
construction with a single-line construction that calls an external
program, the mere fact of having many lines to interpret carries a
penalty right there that you'd not notice in a compiled program.
* Re: the splits
2024-05-23 0:30 ` Ray Andrews
@ 2024-05-23 2:31 ` Mark J. Reed
2024-05-23 12:58 ` Ray Andrews
0 siblings, 1 reply; 12+ messages in thread
From: Mark J. Reed @ 2024-05-23 2:31 UTC (permalink / raw)
To: Ray Andrews; +Cc: zsh-users
The shell is fundamentally an interpreter; stuff written in shell can't
possibly be as fast as the code that is interpreting it. In general, native
tools are going to be more efficient.
I mean, if you had to write code in your awk program to do the parsing
and splitting that it does automatically, that would be slow, too. But
you don't, because it's already written for you inside the awk binary,
in compiled C rather than interpreted awk.
In college I wrote a complete email-based helpdesk/workflow system for my
team of sysadmins, and I prided myself on doing so entirely in pure ksh,
like the Rand MH reimplementation in the back of Korn's book. I definitely
went too far in the direction of avoiding external tools, and have since
corrected.
As I see it, the shell is best utilized as commander and coordinator rather
than actually doing the hands-on nitty-gritty work; that's better delegated
to more efficient (and usually more specialized) tools. The shell can
absolutely do it, but it won't be the best application of the available
resources.
--
Mark J. Reed <markjreed@gmail.com>
On Wed, May 22, 2024 at 20:30 Ray Andrews <rayandrews@eastlink.ca> wrote:
> [...]
* Re: the splits
2024-05-23 2:31 ` Mark J. Reed
@ 2024-05-23 12:58 ` Ray Andrews
0 siblings, 0 replies; 12+ messages in thread
From: Ray Andrews @ 2024-05-23 12:58 UTC (permalink / raw)
To: zsh-users
On 2024-05-22 19:31, Mark J. Reed wrote:
> The shell is fundamentally an interpreter; stuff written in shell
> can't possibly be as fast as the code that is interpreting it. In
> general, native tools are going to be more efficient.
>
> ...
> As I see it, the shell is best utilized as commander and coordinator
> rather than actually doing the hands-on nitty-gritty work; that's
> better delegated to more efficient (and usually more specialized)
> tools. The shell can absolutely do it, but it won't be the best
> application of the available resources.
Makes sense. I understand that the more time spent interpreting (vs.
spent in compiled code), the slower. OTOH there's Lawrence's 'weakest
tool' rule: just grabbing a subscript in zsh wouldn't be expected to
take very long. Dunno, I'd have to look under the hood to really see
which attitude prevails in any given situation. Besides, in my original
post speed is nearly irrelevant; it's an error msg that I just want to
look pretty. Mind ... getting rid of the 'grep', as you first showed,
really did make a noticeable difference. What I was looking for was the
most 'direct' -- really the most legible -- solution. I value clarity
above all. Nice though to be able to discuss elegance and fine tuning,
vs. getting something to work at all -- which is more normal for me!