with 8K names, using base64, one could encode 6111 bytes of data in the name. i just did a quick inventory[*] of my $home; 74% of my files have less than 6112 bytes of data.

[*]
% fn pctfileslessthan6k () {
# du -na prints one line per file, size in bytes: x = total files
x=`{du -na $home|wc -l}
# y = files smaller than 6112 bytes
y=`{du -na $home|awk '$1 < 6112 {print $0}'|wc -l}
# dc is RPN: 2 decimal places, y/x, times 100
pct=`{echo '2k ' $y $x ' / 100 * p' | dc}
echo $pct
}
% pctfileslessthan6k
74.00


On Sun, Jun 23, 2013 at 7:08 AM, erik quanstrom <quanstro@quanstro.net> wrote:
On Sun Jun 23 09:38:01 EDT 2013, arisawa@ar.aichi-u.ac.jp wrote:
> Thank you cinap,
>
> I tried to copy all my Dropbox data to cwfs.
> the number of files that exceeded the 144B name limit was only 3 out of 40000.
> I will be happy if cwfs64x natively accepts longer names, but the requirement
> is almost endless. for example, OSX supports 1024B names.
> I wonder if making NAMELEN larger is the only way to handle the problem.

without a different structure, it is the only way to handle the problem.

a few things to keep in mind about file names.  when they appear
in 9p messages they can't be split between messages.  this applies
to walk, create, stat, and read (of the parent directory).  i think this
places the restriction that maxnamelen <= IOUNIT - 43 bytes.  the
distribution limits IOUNIT through the mnt driver to 8192+24.  (9atom
uses 6*8k+24.)

there are two basic ways to change the format to deal with this:
1.  provide an escape to point to auxiliary storage.  this is kind to
existing storage.
2.  make the name (and thus the directory entry) variable length.

on our fs (which has python and some other nasties), the average
file name length is 11.  making the entries variable length could save 25%
(62 directory entries per buffer).  but it might be annoying to have
to migrate the whole fs.

so since there are so few long names, why not waste a whole block
on them?  if using the "standard" (ish) 8k raw block size (8180 for
data), the expansion of the header could be nil (through creative
encoding) and there would be 3 extra blocks taken for indirect names.
for your case, the cost of 144-byte file names would be that DIRPERBUF
goes from 47 to 31.  so most directories with more than 31 entries will
take 1.5x (in the big-O sense) their original space even if they contain
no long names.

- erik