9fans - fans of the OS Plan 9 from Bell Labs
From: Amol Dixit <amold@vmware.com>
To: "9fans@9fans.net" <9fans@9fans.net>
Subject: Re: [9fans] Duplicate fids on 9p server
Date: Tue, 21 Jan 2014 15:00:51 -0800	[thread overview]
Message-ID: <66EC9842-1F95-40F9-8765-E4C948BD2738@vmware.com> (raw)
In-Reply-To: <A5FB0A5F-01F4-4A75-AA56-E505E908B3A8@vmware.com>


Thanks Charles. Interestingly, I had already read your paper (The Ubiquitous File Server), and it has been my primary reference so far.
As you said, the way to handle this is to maintain open fids as per-client state; it worked. I noticed linux/v9fs also does something similar, although it is not a 9P server per se. At some point in the future, I would like to understand how, in the case of clients sharing a single connection, the "agent managing the sharing arranges that no two clients choose the same fid".

Amol

Charles Forsyth <charles.forsyth@gmail.com> wrote:


see intro(5): "All requests on a connection share the same fid space"

If several clients share one connection, as intro(5) says:
"the agent managing the sharing must arrange that no two clients choose the
same fid".
That happens for instance with many cpu server processes sharing one 9p
connection to the file server.
That won't apply in your case, unless 9pfuse doesn't distinguish the
different connections at the server.
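
To make that concrete: below is a minimal sketch of the kind of allocator
such an agent (e.g. a kernel mount driver) might use. All of the names
here are made up for illustration; this is not any real kernel's code.

    #include <stdint.h>

    enum { Maxfid = 1024 };    /* illustrative bound on open fids */

    /* one shared fid space for everything multiplexed onto the connection */
    typedef struct Fidspace {
        uint8_t inuse[Maxfid];    /* nonzero if that fid is allocated */
    } Fidspace;

    /* hand out the lowest unused fid; returns -1 if the space is exhausted */
    static int
    fidalloc(Fidspace *fs)
    {
        int i;

        for(i = 0; i < Maxfid; i++){
            if(!fs->inuse[i]){
                fs->inuse[i] = 1;
                return i;
            }
        }
        return -1;
    }

    /* return a fid to the shared space once it has been clunked */
    static void
    fidfree(Fidspace *fs, int fid)
    {
        if(fid >= 0 && fid < Maxfid)
            fs->inuse[fid] = 0;
    }

Because every client on the connection obtains its fids from this one
allocator, no two of them can hold the same fid at the same time.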

It's worth (re)reading the description of the protocol in section 5 of
the manual to have a good grasp of the details, even when they are
encapsulated by a library such as lib9p.

"Basically the server should create new internal fids with ClientID+FID to
point to the *same* file  ...".

You don't see that in (say) ramfs, because it has a single connection
(the /srv file), and the kernel correctly manages the fid space for all
client processes. Most 9p services are implemented that way.

The few existing servers that manage distinct connections (eg, network
connections) have per-connection state, and put the fid -> file map in
that state. (There isn't any need for an extra level of "internal
fids"; just keep a separate map per connection.)
In fossil, for instance:

    struct Con {
        ...
        Fid*    fidhash[NFidHash];    /* per-connection Fid map */
        ...
    };

    struct Fid {
        ...
        u32int  fidno;    /* actual 32-bit fid number chosen by client */
        Con*    con;      /* the Fid belongs to one connection */
        File*   file;     /* the Fid points to a File */
        ...
    };
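
For illustration, a lookup against that per-connection table might look
roughly like this. It is only a sketch following the shape of the
excerpt above (it assumes Fid carries a hash-chain pointer, here called
next); it is not fossil's actual code.

    /* find the Fid that a client's fid number names on this connection */
    Fid*
    fidlookup(Con* con, u32int fidno)
    {
        Fid* f;

        for(f = con->fidhash[fidno % NFidHash]; f != nil; f = f->next)
            if(f->fidno == fidno)
                return f;
        return nil;    /* unknown fid on this connection */
    }

Two different connections can each map the same fidno to their own Fid,
which is exactly the per-connection behaviour the protocol requires.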

On Jan 19, 2014, at 10:49 PM, Amol Dixit <amold@vmware.com> wrote:

Hi,
I am pretty new to 9P and am attempting to write a basic file server for 9P clients.
I am using 9pfuse on the client end to connect to my server. I have a question about *fid* allocation on the server.

In the sample server implementation in plan9port lib9p/{ramfs.c, srv.c} … the server code returns Edupfid if a particular fid is already in use (due to attach, walk, open, etc.).
So for a single client, the server works just fine; fids are allocated in order 0, 1, 2… etc.
If a second client using 9pfuse tries to attach to this server, it will request the same sequence of fids 0, 1, 2… and so on. The server code will immediately reject these requests with Edupfid if those fids are currently in use.
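
(For reference, the rejection I am seeing appears to come from a check of
roughly this shape; this is my paraphrase of the lib9p logic, not the
actual srv.c code:)

    /* paraphrase of the duplicate-fid rejection; not the exact lib9p code */
    if(lookupfid(pool, r->ifcall.fid) != nil){
        respond(r, Edupfid);    /* fid already in use in this pool */
        return;
    }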

I first assumed the client would correct itself and retry the same request with a new fid. On more thought, I realized that the server should be the one differentiating between the clients even if they request the same fids for attach/open/walk etc. Am I right in this assumption? Basically the server should create new internal fids with ClientID+FID to point to the *same* file and maintain that mapping… so that each client (9pfuse-based in this case) still sees its own sequence (0, 1, 2…), but the server internally maps them to different IDs pointing to the same file.
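
Concretely, something like the following is what I have in mind; it is
just a rough sketch of the idea, and every name in it is made up:

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical server-internal fid entry keyed by (client, fid) */
    typedef struct Intfid Intfid;
    struct Intfid {
        uint32_t clientid;    /* which client/connection the fid came from */
        uint32_t fid;         /* the fid number that client chose */
        void*    file;        /* the file this entry points to; two
                                 entries may point to the same file */
        Intfid*  next;        /* hash chain */
    };

    enum { Nhash = 64 };
    static Intfid* fidtab[Nhash];    /* one table keyed by (client, fid) */

    /* look up one client's fid; because the client id is part of the
       key, two clients can both use fid 0 without colliding */
    static Intfid*
    intfidlookup(uint32_t clientid, uint32_t fid)
    {
        Intfid* f;

        for(f = fidtab[(clientid ^ fid) % Nhash]; f != NULL; f = f->next)
            if(f->clientid == clientid && f->fid == fid)
                return f;
        return NULL;    /* this (client, fid) pair is unused */
    }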

Hope the above question makes sense.

Thanks in advance,
Amol




Thread overview: 4+ messages
2014-01-20  6:49 Amol Dixit
2014-01-20 11:37 ` Charles Forsyth
2014-01-21 23:00 ` Amol Dixit [this message]
2014-01-21 23:14   ` Charles Forsyth
