From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <59f06232334eddd74b8f1c78efe5621b@quanstro.net>
Date: Fri, 9 Jun 2006 18:51:00 -0500
From: quanstro@quanstro.net
To: 9fans@cse.psu.edu
Subject: Re: [9fans] quantity vs. quality
In-Reply-To: <20060610015739.GA21966@ionkov.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Topicbox-Message-UUID: 6624f384-ead1-11e9-9d60-3106f5b1d025

On Fri Jun  9 18:48:44 CDT 2006, lucho@gmx.net wrote:
> > what is the scenario you're thinking of where malloc could fail
> > and you can recover?
>
> Let's say you have a fossil-like file server and you cannot malloc memory to
> process new requests. Do you want to flush the data buffers back to the disk
> before you die, or do you want to die in some library without any flushing?

have you had a problem with fossil failing in this way? i'm not saying
you should allocate memory like crazy and not worry about running out of
memory. i'm sure that fossil is very careful about allocating memory
for requests.

> Or you have a file server that keeps some non-file-server-related state in
> memory. The inability to serve any more requests is fine as long as it can
> start serving them again at some point later, when there is more memory.
> Dying is not acceptable, because the data kept in memory is important.

i'm skeptical that this is a real-world problem. i've not run out of
memory without hosing the system to the point where it needed to be
rebooted. worse, all these failure modes need to be tested if this is
production code. how many times have you seen the comment

	// client isn't going to check the return code, anyway
	sysfatal("out of memory");

or similar? i'm pretty sure these comments are written by folks who've
been bitten by untested or inconsistent error recovery code.

- erik