MIME-Version: 1.0
In-Reply-To: 
References: 
Date: Tue, 15 Jun 2010 19:01:54 -0400
Message-ID: 
From: "Devon H. O'Dell"
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: text/plain; charset=ISO-8859-1
Subject: Re: [9fans] Inducing artificial latency

2010/6/15 John Floren :
> On Tue, Jun 15, 2010 at 5:45 PM, Devon H. O'Dell wrote:
>> 2010/6/15 John Floren :
>>> I'm going to be doing some work with 9P and high-latency links this
>>> summer and fall. I need to be able to test things over a high-latency
>>> network, but since I may be modifying the kernel, running stuff on
>>> e.g. mordor is not the best option. I have enough systems here to do
>>> the tests; I just need to artificially add latency between them.
>>>
>>> I've come up with a basic idea, but before I go diving in I want to
>>> run it by 9fans and get opinions. What I'm thinking is writing a
>>> synthetic file system that will collect writes to /net; to simulate a
>>> high-latency file copy, you would run this synthetic fs, then do "9fs
>>> remote; cp /n/remote/somefile .". If it's a control message, it gets
>>> sent to the file immediately, but if it's a data write, the data is
>>> held in a queue until some amount of time (say 50ms) has passed, to
>>> simulate network lag. After that time is up, the fs writes to the
>>> underlying file, the data goes out, etc.
>>>
>>> I have not really done any networking or filesystem work on Plan 9;
>>> thus far, I've confined myself to kernel work. I could be completely
>>> mistaken about how 9P and networking behave, so I'm asking for
>>> opinions and suggestions. For my part, I'm trying to find good
>>> example filesystems to read (I've been looking at gpsfs, for
>>> instance).
>>
>> This seems reasonable, but remember that you're only going to be able
>> to simulate latency in one direction as far as each machine is
>> concerned, so you'll want the same configuration on both machines;
>> otherwise things will probably look weird. I'd recommend doing this
>> at a lower level, perhaps introducing a short sleep in IP output so
>> that the latency is consistent across all requests. You could have
>> the stack export another file that you can tune if you want to keep
>> a file-system approach, but I think a synthetic FS for this is
>> overkill.
>>
>
> My plan was to have each side running the fs. I had thought about
> sticking a sleep in the kernel's IP code right before it writes to
> the device, but since I planned on netbooting the test systems (so
> much easier to bring in a new kernel), I thought it might be
> advantageous to slow down only one process. I suppose I could make it
> sleep conditionally: have the process write to a ctl file if it wants
> latency, then sleep only for that pid and its children.

It is actually possible to get at the process sending the packet in
the IP stack.

--dho

>
> John
> --
> "With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
> nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich
>
>
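
For the conditional sleep, something along these lines might work.
This is only a rough, untested sketch assuming the usual kernel
conventions (up, tsleep, return0); the names lagpid, lagms, wantslag,
and latencydelay are made up here, the values would be set from the
ctl write (not shown), and the right call site in the IP output path
still has to be picked out of the source:

#include "u.h"
#include "../port/lib.h"
#include "mem.h"
#include "dat.h"
#include "fns.h"

static int lagpid;	/* pid that asked for latency via the ctl file (hypothetical) */
static int lagms = 50;	/* artificial delay in milliseconds */

static int
wantslag(Proc *p)
{
	/* delay only the requesting process and its children */
	return lagpid != 0 && (p->pid == lagpid || p->parentpid == lagpid);
}

void
latencydelay(void)
{
	/* called from IP output, before the block goes down to the device */
	if(up != nil && wantslag(up))
		tsleep(&up->sleep, return0, 0, lagms);
}

Whether it's safe to sleep at that particular point, as opposed to
queueing the block and letting a timer flush it later, is worth
checking; the per-side fs approach sidesteps that question.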