Date: Fri, 9 Jun 2006 17:55:04 -0500
From: quanstro@quanstro.net
To: 9fans@cse.psu.edu
Subject: Re: [9fans] quantity vs. quality

i've written many production systems this way. i never once had a
malloc failure that was not the result of a catastrophic bug or h/w
failure.

except in the case where you know you might be requesting an
unreasonable amount of memory, i have always thought that it makes
more sense to limit the number of failure states by just quitting
when malloc fails. i want my programs to be as deterministic as
possible. i would much rather have them die than get into some
untested failure state. (does any project that tries to recover from
malloc failure actually test the failure path at each recovery
point?)

the current fad is c++ exceptions. the idea seems okay, but the
programs i know of that use c++ this way (mozilla comes to mind) do
not seem to be very stable or reliable.

- erik

On Fri Jun 9 17:53:03 CDT 2006, lionkov@lanl.gov wrote:
> Another example is using emalloc in libraries. I agree that it is
> much simpler to just give up when there is not enough memory (which
> is also not a very likely case), but is that how the code is
> supposed to be written if you are not doing research?
>
> Thanks,
> Lucho
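
(for context, a minimal sketch of the emalloc idiom under discussion,
in Plan 9 C; the headers, sysfatal(), and the zeroing behavior are
assumptions matching the common Plan 9 version, not necessarily the
library lucho is describing:)

	#include <u.h>
	#include <libc.h>

	/* allocate or die: collapse every malloc failure into one
	 * deterministic exit path instead of n recovery states */
	void*
	emalloc(ulong n)
	{
		void *v;

		v = malloc(n);
		if(v == nil)
			sysfatal("emalloc: no memory for %lud bytes", n);
		memset(v, 0, n);	/* zero the block, as plan 9's emalloc does */
		return v;
	}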