zsh-workers
* excessive memory usage?
@ 2000-07-19  8:59 Clint Adams
  2000-07-19  9:13 ` Peter Stephenson
  0 siblings, 1 reply; 6+ messages in thread
From: Clint Adams @ 2000-07-19  8:59 UTC (permalink / raw)
  To: zsh-workers

I received this bug report last week.  Due to lack of time,
I haven't had a chance to look into it thoroughly, but I
just tried it out, and it did suck up (rather slowly) about
42 megs (temporarily) before it finally completed.

I'm guessing that this happens during compadd.  So is this
evidence of some inefficiency or is all that memory necessary?

As for the second issue, I grabbed a vera.index that is probably
more recent (it has 9109 lines), and zsh happily completes from all
of the 6380 unique values.  I assume that his 7617->5996 effect
is also from duplicate values.
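
(As an illustrative check, not something from the original report, the
number of distinct headwords in an index can be counted with something
like

% awk '{print $1}' /usr/share/dictd/vera.index | sort -u | wc -l

and that unique count should match what the completion listing offers.)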

----- Forwarded message -----

Hi,

I have a ~/.zshfunc directory in $fpath, and I made a _dict file and put it
there. That file contained:

#compdef dict
_arguments '*:dictword:_dictwords'

The file _dictwords in the same directory contained:

#autoload

local expl dictwords

dictwords=($(awk '{print $1}' /usr/share/dictd/*.index))

_wanted dictwords expl dictword \
  compadd -M 'm:{a-zA-Z}={A-Za-z} r:|=*' "$@" - "$dictwords[@]"

The intent was to get 'dict <TAB>' to complete words from dictd's
dictionaries.

I also tried this to define $dictwords:

(( $+_cache_dictwords )) || \
  : ${(A)_cache_dictwords:=${${(f)"$(</usr/share/dictd/wn.index)"}%%[   ]*}}

dictwords=("$_cache_dictwords[@]")

(wn.index is the largest index file there.)

The result was the same in both cases: pressing 'dict <TAB>' produced an
unusable shell (^C or ^\ didn't exit it) that sucked up loads of memory,
reaching 42MB before I killed it.

% awk '{print $1}' /usr/share/dictd/*.index | wc -lc
 143539 1203636

1.2MB of data shouldn't take that much memory to get parsed. :)
Note that it worked when I used just one smaller index file:

% awk '{print $1}' /usr/share/dictd/vera.index | wc -lc
   7617   35197

But, after typing 'dict <TAB>', zsh said:

zsh: do you wish to see all 5996 possibilities?

The array was limited to ~6000 entries?

Please fix this, TIA.

----- End forwarded message -----


* Re: excessive memory usage?
@ 2000-07-19 13:07 Sven Wischnowsky
  2000-07-19 17:30 ` Clint Adams
  2000-07-20  6:14 ` Andrej Borsenkow
  0 siblings, 2 replies; 6+ messages in thread
From: Sven Wischnowsky @ 2000-07-19 13:07 UTC (permalink / raw)
  To: zsh-workers


I had seen this message on the Debian list, but hadn't had the time to
reply yet.


Clint Adams wrote:

> I received this bug report last week.  Due to lack of time,
> I haven't had a chance to look into it thoroughly, but I
> just tried it out, and it did suck up (rather slowly) about
> 42 megs (temporarily) before it finally completed.
> 
> I'm guessing that this happens during compadd.  So is this
> evidence of some inefficiency or is all that memory necessary?

As Peter already pointed out, the problem is really that the function
passes huge arrays down and has to copy them several times; the
solution is to use the -a option to compadd. Some more comments below.
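
For illustration only, here is a sketch of how the _dictwords function
quoted in the report might look when rewritten around -a; the tag and
description are simply carried over from that function, and this is
untested:

#autoload

local expl
local -a dictwords

# Build the word list; compadd is given the *name* of the array via -a,
# so the words are never copied onto compadd's argument list.
dictwords=($(awk '{print $1}' /usr/share/dictd/*.index))

_wanted dictwords expl dictword compadd "$@" -a dictwords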

> As for the second issue, I grabbed a vera.index that is probably
> more recent (it has 9109 lines), and zsh happily completes from all
> of the 6380 unique values.  I assume that his 7617->5996 effect
> is also from duplicate values.

This is what I was thinking, too.

> ...
> 
> I have ~/.zshfunc directory in $fpath, and I made _dict file and put it
> there. That file contained:
> 
> #compdef dict
> _arguments '*:dictword:_dictwords'

If there isn't anything else in this file, then we don't need to use
_arguments: calling _arguments with only one `*:...' spec and no
option specs or other argument specs is the same as performing the
action from the `*:...' spec directly, only slower.
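
In that case (assuming the file really contains nothing but the two
lines quoted above), _dict could be reduced to:

#compdef dict
_dictwords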

> ...
> 
> I also tried this to define $dictwords:
> 
> (( $+_cache_dictwords )) || \
>   : ${(A)_cache_dictwords:=${${(f)"$(</usr/share/dictd/wn.index)"}%%[   ]*}}
> 
> dictwords=("$_cache_dictwords[@]")

For things that change as seldom as dictionaries, caching is
definitely a good idea. But there is no need to copy the cache array
into yet another local array -- that would only cause more unnecessary
allocation and copying. Just use `compadd ... -a _cache_dictwords'.
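
Putting those two points together, a sketch of the cached variant could
look like this (the caching line is the one from the report, with the
whitespace class spelled out as [[:space:]]; untested):

#autoload

local expl

# Fill the cache only on the first completion attempt in this shell.
(( $+_cache_dictwords )) || \
  : ${(A)_cache_dictwords:=${${(f)"$(</usr/share/dictd/wn.index)"}%%[[:space:]]*}}

# Pass the cache array by name; no per-call copy into a local array.
_wanted dictwords expl dictword compadd "$@" -a _cache_dictwords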


And if this is ever intended to be included in the distribution, the
-M should be removed; there are enough styles around to give users the
possibility to define matchers themselves if they want them. In a
simple personal completion function it is fine, of course.
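
For example, a user who wants that matching behaviour could set the
matcher-list style in ~/.zshrc instead (an illustrative global setting;
the context pattern can of course be narrowed):

zstyle ':completion:*' matcher-list 'm:{a-zA-Z}={A-Za-z} r:|=*'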


Bye
 Sven


--
Sven Wischnowsky                         wischnow@informatik.hu-berlin.de



Thread overview: 6+ messages
2000-07-19  8:59 excessive memory usage? Clint Adams
2000-07-19  9:13 ` Peter Stephenson
2000-07-19 17:16   ` Clint Adams
2000-07-19 13:07 Sven Wischnowsky
2000-07-19 17:30 ` Clint Adams
2000-07-20  6:14 ` Andrej Borsenkow
