From mboxrd@z Thu Jan  1 00:00:00 1970
From: smiley@icebubble.org
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Wed, 28 Dec 2011 21:48:46 +0000
Message-ID: <86zkecbc5d.fsf@cmarib.ramside>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Subject: [9fans] p9p vac/vacfs compatibility: uid/gid, ctime, 9P2000.L, 9pserve
Topicbox-Message-UUID: 51a7801a-ead7-11e9-9d60-3106f5b1d025

Hello,

I'm trying to use vac/vacfs on p9p from the Linux v9fs driver (-t 9p),
with Unix domain sockets as the transport (-o trans=unix).  And I'm
confused. :(

When I mount the file system, all the files are owned by
uid = gid = (-2 mod 2^32), unless I tell the Linux kernel to mount the
fs as a specific user/group.  In the latter case, all files present
with the specified user/group (as expected).  I can't find any way to
get at the actual uids/gids that owned the files/directories when they
were vac'd.

I have tried explicitly specifying -o version=9p2000.u, but it exhibits
the same symptoms: all files and directories appear to be owned by
uid = gid = (-2 mod 2^32).  All uids/gids in the vac'd tree existed in
/etc/{passwd,group} at the time of vac'ing, as well as at mount time.

So, I took a little looksie in /usr/lib/plan9/src/cmd/vac/...

----------------------------------------------------------------
/usr/lib/plan9/src/cmd/vac/vac.c:48

#ifdef PLAN9PORT
/*
 * We're between a rock and a hard place here.
 * The pw library (getpwnam, etc.) reads the
 * password and group files into an on-stack buffer,
 * so if you have some huge groups, you overflow
 * the stack.  Because of this, the thread library turns
 * it off by default, so that dirstat returns "14571" instead of "rsc".
 * But for vac we want names.  So cautiously turn the pw library
 * back on (see threadmain) and make the main thread stack huge.
 */
extern int _p9usepwlibrary;
int mainstacksize = 4*1024*1024;
#endif
----------------------------------------------------------------
/usr/lib/plan9/src/cmd/vac/vac.c:399

	vd->uid = dir->uid;
	vd->gid = dir->gid;
	vd->mid = dir->muid;
----------------------------------------------------------------

So, the source *appears* to stow uids/gids to venti as character
strings.  BTW,

----------------------------------------------------------------
/usr/lib/plan9/src/cmd/vac/vac.c:406

	vd->ctime = dir->mtime;	/* ctime: not available on plan 9 */
----------------------------------------------------------------

Does this mean that vac doesn't store ctimes on p9p?  Shouldn't this be
wrapped in something like:

----------------------------------------------------------------
#ifdef PLAN9PORT
	vd->ctime = dir->ctime;	/* ctime: available on p9p */
#else
	vd->ctime = dir->mtime;	/* ctime: not available on plan 9 */
#endif
----------------------------------------------------------------

Then, we have:

----------------------------------------------------------------
/usr/lib/plan9/src/cmd/vac/vacfs.c:696

	dir.atime = vd->atime;
	dir.mtime = vd->mtime;
	dir.length = vd->size;
	dir.name = vd->elem;
	dir.uid = vd->uid;
	dir.gid = vd->gid;
	dir.muid = vd->mid;
#ifdef PLAN9PORT
	dir.ext = ext;
	dir.uidnum = atoi(vd->uid);
	dir.gidnum = atoi(vd->gid);
#endif
----------------------------------------------------------------

Those dir.uidnum and dir.gidnum don't appear to be referenced anywhere
else in the code.  But it makes me wonder... are the character-string
uids/gids that get vac'd the textual names from /etc/{passwd,group}, or
just ASCII renderings of the numeric ids, a la format "%d"?

And just what is "9P2000.L", anyway?  I can't seem to find any
documentation on it.

Last, but not least, can anyone tell me why 9pserve is taking up so
much CPU time?  From the docs, it looks like 9pserve should just be
muxing 9p connections and passing messages around.
I had expected that disk bandwidth would be the bottleneck in my setup.
But, as it turns out, the bottleneck is actually this little fellow,
9pserve.  I have no idea why it takes so much CPU to multiplex 9p,
especially since there's only one connection being "multiplexed"!  Is
there any way to bypass 9pserve (since I only need one connection for
now), i.e., to speak 9p directly over a socket or a shared file
descriptor?

Thanks!

-- 
+---------------------------------------------------------------+
|Smiley                                   PGP key ID: BC549F8B  |
|Fingerprint: 9329 DB4A 30F5 6EDA D2BA 3489 DAB7 555A BC54 9F8B |
+---------------------------------------------------------------+