From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 9 Jun 2006 17:10:24 -0700
From: Roman Shaposhnick
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] quantity vs. quality
Message-ID: <20060610001024.GB2291@submarine>
References: <20060610015739.GA21966@ionkov.net> <59f06232334eddd74b8f1c78efe5621b@quanstro.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=koi8-r
Content-Disposition: inline
In-Reply-To: <59f06232334eddd74b8f1c78efe5621b@quanstro.net>
User-Agent: Mutt/1.4.2.1i
Topicbox-Message-UUID: 66a0b91a-ead1-11e9-9d60-3106f5b1d025

On Fri, Jun 09, 2006 at 06:51:00PM -0500, quanstro@quanstro.net wrote:
> > Or you have a file server that keeps some non-file server related state in
> > memory. The inability to serve any more requests is fine as long as it can
> > start serving them at some point later when there is more memory. The dying
> > is not acceptable because the data kept in the memory is important.
>
> i'm skeptical that this is a real-world problem. i've not run out of memory
> without hosing the system to the point where it needed to be rebooted.
>
> worse, all these failure modes need to be tested if this is production code.

  I believe this is the crucial issue here. True, what Latchesar is after
is a fine goal; it's just that I have yet to see a production system where
it saves you from bigger trouble. Maybe I've been exceptionally unlucky,
but that's the reality: if you truly run out of memory, you're screwed.
Your escape strategies don't work; worse yet, they are *very* likely to
fail in a manner you don't expect, in a layer you have no knowledge about.

  Consider this: what's worse, a fossil server that died and lost some of
the requests, or a server that tried to recover and committed random junk?

Thanks,
Roman.
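
P.S. To make the trade-off concrete, here is a rough sketch of the two
allocation strategies being argued about. This is not fossil's code; the
names (emalloc, waitmalloc) and the one-second retry are made up purely
for illustration.

#include <u.h>
#include <libc.h>

/*
 * die on allocation failure: you lose the in-flight requests,
 * but you never carry on with state you cannot trust.
 */
void*
emalloc(ulong n)
{
	void *p;

	p = malloc(n);
	if(p == nil)
		sysfatal("out of memory");
	return p;
}

/*
 * stall until memory comes back: the behaviour being argued for.
 * the catch is that by the time malloc fails here, some other
 * layer may already have failed in a way this loop never sees.
 */
void*
waitmalloc(ulong n)
{
	void *p;

	while((p = malloc(n)) == nil)
		sleep(1000);	/* back off and hope something frees memory */
	return p;
}

With the first, a client sees a dropped connection and retries; with the
second, the server keeps its state but has to trust that nothing else
broke while it was starving.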