From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To:
References:
Date: Wed, 16 Jun 2010 19:04:10 +0000
Message-ID:
From: John Floren
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: text/plain; charset=UTF-8
Subject: Re: [9fans] Inducing artificial latency
Topicbox-Message-UUID: 3465e1be-ead6-11e9-9d60-3106f5b1d025

On Tue, Jun 15, 2010 at 9:45 PM, Devon H. O'Dell wrote:
> 2010/6/15 John Floren :
>> I'm going to be doing some work with 9P and high-latency links this
>> summer and fall. I need to be able to test things over a high-latency
>> network, but since I may be modifying the kernel, running stuff on
>> e.g. mordor is not the best option. I have enough systems here to do
>> the tests, I just need to artificially add latency between them.
>>
>> I've come up with a basic idea, but before I go diving in I want to
>> run it by 9fans and get opinions. What I'm thinking is writing a
>> synthetic file system that will collect writes to /net; to simulate a
>> high-latency file copy, you would run this synthetic fs, then do "9fs
>> remote; cp /n/remote/somefile .". If it's a control message, that gets
>> sent to the file immediately, but if it's a data write, that data
>> actually gets held in a queue until some amount of time (say 50ms) has
>> passed, to simulate network lag. After that time is up, the fs writes
>> to the underlying file, the data goes out, etc.
>>
>> I have not really done any networking stuff or filesystems work on
>> Plan 9; thus far, I've confined myself to kernel work. I could be
>> completely mistaken about how 9P and networking behave, so I'm asking
>> for opinions and suggestions. For my part, I'm trying to find good
>> example filesystems to read (I've been looking at gpsfs, for
>> instance).
>
> This seems reasonable, but remember that you're only going to be able
> to simulate latency in one direction as far as the machine is
> concerned.
> So you'll want to have the same configuration on both
> machines, otherwise things will probably look weird. I'd recommend
> doing this at a lower level, perhaps introducing a short sleep in IP
> output so that the latency is consistent across all requests. And of
> course both machines would need to have this configuration. You could
> have the stack export another file that you can tune if you want to
> maintain a file-system approach. But I think a synthetic FS for this
> is overkill.
>
> If you're on a local network, this should do the trick, otherwise you
> may need to do some adaptive latency.
>
> --dho

I ended up just sticking a sleep in /sys/src/9/ip/devip.c right before
data gets written to the device. This seems to add the desired latency
with no trouble.

I've decided that when running tests I'll probably just boot the testing
systems off their own local fossils; this should help prevent the latency
from adding additional time as binaries are fetched from the (now
"distant") file server.

Thanks!

John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich