From: Sven Wischnowsky <wischnow@informatik.hu-berlin.de>
To: zsh-workers@sunsite.auc.dk
Subject: Re: Speaking of slow completion ...
Date: Thu, 8 Jun 2000 10:53:13 +0200 (MET DST) [thread overview]
Message-ID: <200006080853.KAA18466@beta.informatik.hu-berlin.de> (raw)
In-Reply-To: "Bart Schaefer"'s message of Wed, 7 Jun 2000 17:14:03 +0000
Bart Schaefer wrote:
> Have any of you with sourceforge logins tried completing `cd /h/g/z'?
>
> It's *minutes* before something comes back, the first time. Thereafter
> the filesystem info is cached (by the linux kernel, not in zsh) and it
> goes down to only several seconds. (E.g. if you were to log in and try it
> *right now*, it'd be fairly fast, because *I* just waited the minutes for
> the FS cache to load.)
>
> Any ideas on optimizing _path_files for directories with LOTS of subdirs?
I have been thinking about _path_files quite often. For one thing, I
still think that if any completion function gets C-code support, then it
should be _path_files, probably the function that gets called more
often than any other. C-support should help us avoid all that
array-handling.
Another thing is the globbing. Because of possible `*' match specs,
_path_files always has to get all files (or directories), which is
slow, of course. Even using just the first character of the $PREFIX in
the globbing could speed things up significantly, but if that character
is one of the anchors in a `r:|...=*' pattern, the result could be wrong.
Hm, maybe we should add C-support for getting `special' characters
with respect to the currently used match specs and _path_files could
then use the part of the $PREFIX (and $SUFFIX) that doesn't contain
such characters.
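A toy sketch (plain sh, not zsh's real matcher engine) of why narrowing
the glob by the typed prefix can be wrong under a match spec: with a
case-insensitive matcher such as `m:{a-z}={A-Z}', a literal `zs*' glob
misses names the matcher would still accept, so the narrowed result is
a strict subset of the correct one. The file names here are made up.

```shell
#!/bin/sh
# Toy illustration, NOT zsh's real matching code: candidate names in a
# directory, the typed prefix, and a case-insensitive match spec.
candidates="zshenv ZShrc apple"
prefix="zs"

# Narrow pass: take only names starting with the literal prefix.
# Cheap, but misses names the matcher would still accept.
narrow=""
for c in $candidates; do
  case $c in "$prefix"*) narrow="$narrow$c ";; esac
done

# Full pass: what _path_files does today -- take every name and let
# the matcher decide (modelled here as case-insensitive comparison).
full=""
for c in $candidates; do
  lc=$(printf '%s' "$c" | tr 'A-Z' 'a-z')
  case $lc in "$prefix"*) full="$full$c ";; esac
done

echo "narrow: $narrow"
echo "full:   $full"
```

The narrow pass finds only `zshenv', while the full pass also finds
`ZShrc' -- exactly the case where prefix-narrowed globbing breaks.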
Another (more interesting) thing is trying to glob more than one
directory level at once. There's a problem with the expand style,
though.
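The multi-level idea can be sketched like this (plain sh, with a made-up
directory layout standing in for /h/g/z-style paths): a single
`h*/g*/z*' glob replaces three per-level globs with a directory read and
an array round-trip between each level.

```shell
#!/bin/sh
# Hypothetical layout standing in for /h/g/z-style path completion.
d=$(mktemp -d)
mkdir -p "$d/home/groups/zsh" "$d/home/groups/zebra" "$d/host/gfx/zip"
cd "$d" || exit 1

# One multi-level glob does the work of globbing each level separately
# (h*, then g* inside each match, then z* inside each of those).
matches=$(printf '%s\n' h*/g*/z* | sort)
echo "$matches"
```

All three leaf directories come back from the one glob; the per-level
approach would have needed three separate globbing passes.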
Felix Rosencrantz wrote:
> Something I suggested in workers/9630 would be to have a style that caused
> _path_files to basically do stats (-d/-f) on the leading portion of a path that
> is to be completed. If the leading part exists, don't attempt to do any
> completion on that part of the path, you've got what you want. Or check for
> existence before attempting to glob a name.
Seems like I didn't fully grok that at the time. Yes, indeed, it
doesn't even sound hard to implement.
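A minimal sketch of the check Felix describes (plain sh; the helper
name is made up, not real _path_files code): stat the literal leading
part of the word first, and only fall back to globbing when it doesn't
exist.

```shell
#!/bin/sh
# Hypothetical helper: if the literal leading path already exists,
# accept it as-is and skip completion work for that part; only glob
# when the component is genuinely ambiguous.
try_leading() {
  if [ -e "$1" ]; then
    echo "exists: $1 (no globbing needed for this part)"
  else
    echo "missing: $1 (fall back to globbing for matches)"
  fi
}
try_leading /tmp
try_leading /no/such/dir
```

For a path like /h/g/z on a huge filesystem, this turns the common
"leading part is already complete" case into a couple of stat calls
instead of full directory reads.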
> ...
>
> Also, on a side note, I think there might be a bug in compadd (possibly
> matching) that causes it to get in a bad state if an interrupt is sent while it
> is working. I typically want to interrupt in situations when completion is
> slow. Completion works, but matching seems to have some problems. (I know,
> you probably want some of those helpful details...)
Indeed, I do.
Bye
Sven
--
Sven Wischnowsky wischnow@informatik.hu-berlin.de