From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 20386 invoked from network); 28 Feb 2000 10:07:21 -0000
Received: from sunsite.auc.dk (130.225.51.30) by ns1.primenet.com.au with SMTP; 28 Feb 2000 10:07:21 -0000
Received: (qmail 27456 invoked by alias); 28 Feb 2000 10:07:14 -0000
Mailing-List: contact zsh-workers-help@sunsite.auc.dk; run by ezmlm
Precedence: bulk
X-No-Archive: yes
X-Seq: 9898
Received: (qmail 27446 invoked from network); 28 Feb 2000 10:07:11 -0000
Date: Mon, 28 Feb 2000 11:07:05 +0100 (MET)
Message-Id: <200002281007.LAA02352@beta.informatik.hu-berlin.de>
From: Sven Wischnowsky
To: zsh-workers@sunsite.auc.dk
In-reply-to: "Bart Schaefer"'s message of Fri, 25 Feb 2000 17:35:53 +0000
Subject: Re: Precompiled wordcode zsh functions


[ I implemented my ideas at the weekend and got to read this mail
  today, so I'm withholding the patch for now... ]

Bart Schaefer wrote:

> On Feb 25, 11:42am, Sven Wischnowsky wrote:
> } Subject: Re: Precompiled wordcode zsh functions
> }
> } Add a builtin (`zcompile' if you wish), that gets a list of
> } filenames. The first one is used as the file to write the code for all
> } functions named by the other filenames into. These have to name
> } existing function files (not necessarily in $fpath). So the generated
> } file is a kind of digest containing the code for multiple functions.
> }
> } Then: $fpath may also contain names of such digest files.
> 
> So far so good, though I'd still prefer if such a file could just sit
> inside a directory in $fpath (or some other searched path) and be loaded
> like any other autoloaded function.  (Which means there needs to be some
> sort of convention for choosing the compiled file if both compiled and
> uncompiled functions are present.)

Hm. If we think about one file per function, we should certainly make
them findable in the directories in $fpath. But that would basically
give us two types of dump-files, unless we make them detectable (e.g.
by the .zwc extension you suggest) and make getfpfunc() search all
.zwc files for the function we are trying to load.

Hm, or maybe two different kinds of lookup: if getfpfunc() finds that
one of the strings in $fpath isn't a directory containing a file with
the name searched for, it tries to use it as a dump-file containing
multiple functions and checks whether it contains the definition of
the function searched for (that's basically how the stuff I wrote
works). But if the $fpath element just being handled is a directory
and it contains a file `name.zwc' for the function name searched for,
we use that (at this point we could compare the modification times of
`name' and `name.zwc', of course).

> } In getfpfunc() (that's where we load autoloaded functions), if the
> } name of a digest file in $fpath is found, the file is searched for
> } the definition of the function we are seeking. If it contains this
> } function, the thing is mapped and the Eprog is set up.
> 
> Hmm. Probably there'd have to be a "directory" at the top of the file
> with the names and offsets (or some such) of all the functions therein.

That's what my implementation does. Since this is currently only
intended for files containing lots of functions, they are always
mapped. They are even mapped completely for now; that could probably
be changed to map them step by step as more and more functions from
them are used, although I'm really not that concerned about memory
usage here. The completion functions (only the _* files) take up
somewhat less than 300KB, btw.
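
To make that `directory at the top of the file' idea a bit more
concrete, something along these lines would do. The struct and field
names here are made up for the example; they are not the real zsh
wordcode structures:

    /*
     * Illustrative sketch only: a header with a function count, then a
     * table of name/offset entries, then the wordcode itself.
     */

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint32_t magic;      /* marks the file as a wordcode digest */
        uint32_t nfuncs;     /* number of functions in the digest */
    } DigestHeader;

    typedef struct {
        uint32_t name_off;   /* offset of the NUL-terminated name */
        uint32_t code_off;   /* offset of the function's wordcode */
        uint32_t code_len;   /* length of that wordcode in bytes */
    } DigestEntry;

    /* Given the mmap'd base of a digest, return a pointer to the
     * wordcode of the named function, or NULL if it isn't in there. */
    static const void *
    digest_find(const char *base, const char *name)
    {
        const DigestHeader *hdr = (const DigestHeader *) base;
        const DigestEntry *tab =
            (const DigestEntry *) (base + sizeof(*hdr));
        uint32_t i;

        for (i = 0; i < hdr->nfuncs; i++)
            if (!strcmp(base + tab[i].name_off, name))
                return base + tab[i].code_off;
        return NULL;
    }

With the table mapped, looking a function up is just a walk over the
entries, which is why mapping only parts of the file later would not
change the lookup itself.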
> That header could also contain some flags determined at compile time,
> such as whether the file should be mmap'd or merely read.  Such a flag
> would normally be computed by the compiler based on the size or some
> such criteria, but could be overridden by an option to the "zcompile"
> (or whatever) builtin.  Thus if one wanted to have a lot of small files
> with only one function each, the result would not be a zillion mmaps.

Hm, yes, I hadn't thought about that. I'm not so sure about the
automatic detection of the flag, since it would involve some kind of
threshold. It's always so difficult to find a good value (a page size?
per function, or for the whole file if it contains more than one
function?).

> } One problem: should there be some warning if the digest file is older
> } than the function file (if that is reachable through $fpath)? I.e. do
> } we have to test that?
> 
> I *think* emacs detects that condition only when the .el and .elc are
> in the same directory.  Certainly we shouldn't go searching the entire
> fpath to verify every compiled function, particularly if there is more
> than one function in each wordcode file.

Yep. The implementation I have now does nothing about this, because it
only thinks about `digest' files.

> } Second problem: functions like _cvs that essentially just define lots
> } of functions and re-define themselves[1].
> 
> I saw your follow-up, but one remark: That technique would no longer be
> necessary because loading the wordcode file would immediately define all
> the functions therein without having to execute one of them first.

But it doesn't do any harm either -- it is very fast. (With such a
dump-file in your fpath, those initial completions where the functions
are loaded and parsed become, of course, a lot faster, and defining
functions within functions from a dump-file is very fast, too.)


Bye
 Sven

-- 
Sven Wischnowsky                         wischnow@informatik.hu-berlin.de
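
P.S.: Just to make the threshold question above a bit more concrete,
one way a size-based mmap-or-read decision could look is sketched
below. The function name and the one-page value are made up for the
example only; they are not something zsh does:

    /*
     * Illustration only: mmap a compiled file if it is at least one
     * page long, otherwise just read it into allocated memory.
     */

    #include <unistd.h>
    #include <sys/types.h>

    /* Return nonzero if a compiled file of this size is worth mmap'ing
     * rather than simply being read into allocated memory. */
    static int
    should_mmap(off_t filesize)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

        if (pagesize <= 0)
            pagesize = 4096;   /* conservative fallback */
        return filesize >= pagesize;
    }

Whether such a test should look at the whole file or at each function
separately is exactly the open question, of course.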