zsh-workers
* Re: Speaking of slow completion ...
@ 2000-06-08  8:53 Sven Wischnowsky
  0 siblings, 0 replies; 3+ messages in thread
From: Sven Wischnowsky @ 2000-06-08  8:53 UTC (permalink / raw)
  To: zsh-workers


Bart Schaefer wrote:

> Have any of you with sourceforge logins tried completing `cd /h/g/z'?
> 
> It's *minutes* before something comes back, the first time.  Thereafter
> the filesystem info is cached (by the linux kernel, not in zsh) and it
> goes down to only several seconds. (E.g. if you were to log in and try it
> *right now*, it'd be fairly fast, because *I* just waited the minutes for
> the FS cache to load.)
> 
> Any ideas on optimizing _path_files for directories with LOTS of subdirs?

I have been thinking about _path_files quite often. For one thing, I
still think that if any completion function gets C-code support, it
should be _path_files, probably the function that gets called more
often than any other. C support would help us avoid all that
array handling.

Another thing is the globbing. Because of possible `*' match specs,
_path_files always has to glob all files (or directories), which is
slow, of course. Using even just the first character of $PREFIX in
the glob could speed things up significantly, but if that character is
one of the anchors in a `r:|...=*' pattern, the result could be wrong.

Hm, maybe we should add C-support for finding out which characters are
`special' with respect to the currently used match specs; _path_files
could then use the part of $PREFIX (and $SUFFIX) that doesn't contain
such characters.
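For illustration, a plain-sh sketch of the prefix-restricted glob idea
(the directory layout and variable names are made up; $PREFIX here is an
ordinary shell variable standing in for the completion system's $PREFIX):

```shell
# Sketch only: compare an unrestricted glob with one restricted by the
# first character of the prefix.  The restricted form is only safe when
# that character cannot be rewritten by a match spec.
tmp=$(mktemp -d)
for d in alpha beta gamma zebra zsh-4.0; do mkdir "$tmp/$d"; done

PREFIX=zs
first=${PREFIX%"${PREFIX#?}"}   # first character of the prefix

# What _path_files must do when a `*' match spec could apply:
set -- "$tmp"/*
unrestricted=$#                 # all 5 entries come back

# What it could do when $first is known not to be a match-spec anchor:
set -- "$tmp"/"$first"*
restricted=$#                   # only the 2 z* entries come back

echo "$unrestricted vs $restricted"
rm -rf "$tmp"
```

The point is that the kernel does the filtering during the directory
scan, so far fewer names ever reach the shell's array handling.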

Another (more interesting) idea is trying to glob more than one
directory level at once. There's a problem with the expand style,
though.
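What "more than one level at once" might look like, sketched in plain sh
over a fabricated tree (in zsh this would be a single `h*/g*/z*` glob
instead of one directory scan per level):

```shell
# Sketch only: expand all three path levels in one glob rather than
# scanning and matching each directory level separately.
tmp=$(mktemp -d)
mkdir -p "$tmp/home/guests/zsh" "$tmp/home/general/zip" "$tmp/hosts"

set -- "$tmp"/h*/g*/z*
multi=$#                        # .../guests/zsh and .../general/zip match
echo "$multi matches"
rm -rf "$tmp"
```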


Felix Rosencrantz wrote:

> Something I suggested in workers/9630 would be to have a style that caused 
> _path_files to basically do stats (-d/-f) on the leading portion of a path that
> is to be completed.  If the leading part exists, don't attempt to do any
> completion on that part of the path, you've got what you want.  Or check for
> existence before attempting to glob a name.

Seems like I didn't fully grok that at the time. Yes, indeed, it doesn't
even sound hard to implement.

> ...
> 
> Also, on a side note, I think there might be a bug in compadd (possibly
> matching) that causes it to get in a bad state if an interrupt is sent while it
> is working.  I typically want to interrupt in situations when completion is
> slow.  Completion works, but matching seems to have some problems.  (I know,
> you probably want some of those helpful details...)

Indeed, I do.

Bye
 Sven


--
Sven Wischnowsky                         wischnow@informatik.hu-berlin.de


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Speaking of slow completion ...
@ 2000-06-08  3:14 Felix Rosencrantz
  0 siblings, 0 replies; 3+ messages in thread
From: Felix Rosencrantz @ 2000-06-08  3:14 UTC (permalink / raw)
  To: zsh-workers

--- Bart Schaefer <schaefer@candle.brasslantern.com> wrote:
>Any ideas on optimizing _path_files for directories with LOTS of subdirs?

Not sure what you can do if you have to wait for the OS.

Something I suggested in workers/9630 would be to have a style that caused 
_path_files to basically do stats (-d/-f) on the leading portion of a path that
is to be completed.  If the leading part exists, don't attempt to do any
completion on that part of the path, you've got what you want.  Or check for
existence before attempting to glob a name.

I think the stats will go much faster than the directory scans.
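A rough plain-sh illustration of the stat-before-glob idea (file and
variable names are invented; the real _path_files works on $PREFIX and
$SUFFIX, not a plain `word' variable): test the leading portion with -d
first, and only glob the final component.

```shell
# Sketch only: take an existing leading directory verbatim (one stat)
# instead of glob-matching every component of the path.
tmp=$(mktemp -d)
mkdir -p "$tmp/Src/Zle"
: > "$tmp/Src/Zle/comp1.c"
: > "$tmp/Src/Zle/compcore.c"

word="$tmp/Src/Zle/comp"        # the word being completed
dir=${word%/*}                  # leading portion
leaf=${word##*/}                # component still to be completed

if [ -d "$dir" ]; then
    # Leading part exists: skip per-component completion entirely.
    set -- "$dir/$leaf"*
    matches=$#
else
    matches=0                   # would fall back to per-component globbing
fi
echo "$matches matches under $dir"
rm -rf "$tmp"
```

Each -d test is a single stat, while completing a component means reading
the whole directory, which is exactly what hurts in the huge-directory
case.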

This doesn't really help in your example, since you want zsh to perform
completion in the big directories.  But if you know certain directories are
going to cause you to wait for a while, you might be willing to type that
part of the path.

Also, on a side note, I think there might be a bug in compadd (possibly
matching) that causes it to get in a bad state if an interrupt is sent while it
is working.  I typically want to interrupt in situations when completion is
slow.  Completion works, but matching seems to have some problems.  (I know,
you probably want some of those helpful details...)

-FR



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Speaking of slow completion ...
@ 2000-06-07 17:14 Bart Schaefer
  0 siblings, 0 replies; 3+ messages in thread
From: Bart Schaefer @ 2000-06-07 17:14 UTC (permalink / raw)
  To: zsh-workers

Have any of you with sourceforge logins tried completing `cd /h/g/z'?

It's *minutes* before something comes back, the first time.  Thereafter
the filesystem info is cached (by the linux kernel, not in zsh) and it
goes down to only several seconds. (E.g. if you were to log in and try it
*right now*, it'd be fairly fast, because *I* just waited the minutes for
the FS cache to load.)

Any ideas on optimizing _path_files for directories with LOTS of subdirs?

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com

Zsh: http://www.zsh.org | PHPerl Project: http://phperl.sourceforge.net   


^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2000-06-08  8:53 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2000-06-08  8:53 Speaking of slow completion Sven Wischnowsky
  -- strict thread matches above, loose matches on Subject: below --
2000-06-08  3:14 Felix Rosencrantz
2000-06-07 17:14 Bart Schaefer

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/zsh/
