From: "Bart Schaefer" <schaefer@brasslantern.com>
To: Sven Guckes <guckes@math.fu-berlin.de>, zsh-users@math.gatech.edu
Subject: Re: lssum - summing up sizes of files
Date: Sat, 13 Jun 1998 17:20:57 -0700 [thread overview]
Message-ID: <980613172058.ZM28450@candle.brasslantern.com> (raw)
In-Reply-To: <19980613211233.A5224@math.fu-berlin.de>
In-Reply-To: <19980613235639.B5224@math.fu-berlin.de>
On Jun 13, 9:12pm, Sven Guckes wrote:
} Subject: Re: lssum - summing up sizes of files
}
} Doesn't everybody need this kind of thing every day?
I've been using unix for almost fourteen years now (gaah) and I can count
the times I've *needed* that on one hand. There were a few more times
when I was just curious, but then "wc -c * | tail -1" seemed adequate.
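The "wc -c | tail" trick works because wc prints a per-file count plus a final "total" line when given more than one file. A minimal sketch (the files and sizes here are invented for illustration):

```shell
# wc -c emits one line per file and a closing "total" line
# when it is given multiple arguments; tail -1 grabs the sum.
tmpdir=$(mktemp -d) && cd "$tmpdir"
printf 'hello' > a.txt        # 5 bytes
printf 'world!' > b.txt       # 6 bytes
wc -c * | tail -1             # the "total" line carries the sum
```

Note the caveat: with a single matching file, wc prints no "total" line, so the tail output is just that file's count.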
} > stat -A sizes +size $*
}
} Looks cool! We have zsh-3.1.2 here - will that version work?
I think the stat module is available for 3.1.2.
} Or are "modules" a new feature of zsh-3.1.4?
No, but they're not in 3.0.
} How do you "load" those modules, anyway?
They're either built in, or if zsh was compiled for dynamic loading, then
you use the "zmodload" command. "zmodload" works a lot like "autoload",
but searches $MODULE_PATH for a shared library (.so) named for the module.
On Jun 13, 11:56pm, Sven Guckes wrote:
} Subject: Re: lssum - summing up sizes of files
}
} Sure, but - if the zsh has to do globbing anyway
} then why not have it look at the files, too?
Most globbing can be done by reading the directory structure. Looking at
the files too is much more expensive.
} As there is a GLOBBING modifier for "size"
File-statistics glob qualifiers are applied only to eliminate a file after
all the fast pattern matching on its name has succeeded. (Modifiers, by
contrast, are the things that change the string returned once the glob
succeeds.)
} there must be a nice way to extract the size of files somehow.
That's what the "stat" module is for.
An interesting alternative to the "stat" module would be a glob modifier
that replaces the file name with one of its statistics. E.g.
echo *(D:#G)
might echo the group-ids of every file in the current directory (:#L for
the size, :#m for the mod time, etc.). I don't plan to hold my breath
expecting someone to implement this, though.
} Would be nice if there was an easy way to get the sum
} without writing an explicit loop for this, too.
That's an awfully special special-case. Is there some kind of general
loop replacement that you're thinking of?
--
Bart Schaefer Brass Lantern Enterprises
http://www.well.com/user/barts http://www.brasslantern.com
Thread overview: 7+ messages
1998-06-13 16:10 Sven Guckes
1998-06-13 17:06 ` Bart Schaefer
1998-06-13 19:12 ` Sven Guckes
1998-06-13 19:33 ` adding examples to man pages Richard Coleman
1998-06-13 19:38 ` lssum - summing up sizes of files Danek Duvall
1998-06-13 21:56 ` Sven Guckes
1998-06-14 0:20 ` Bart Schaefer [this message]