From: Jon Harrop
To: caml-list
Subject: Re: [Caml-list] laziness
Date: Mon, 6 Sep 2004 13:55:05 +0100

On Monday 06 September 2004 01:57, Richard Jones wrote:
> One thing that worries me about laziness.
>
> Doesn't laziness often indicate a bug in the code?

No. Hence some languages (like Haskell) are completely lazy. Some people
advocate approaches which are lazy by default (like Haskell's); others
prefer to use laziness explicitly, only when they feel it is necessary.
The principal problem with lots of laziness is the difficulty of
predicting memory usage, which is very implementation-dependent.

> ie. You've written an expression in the program, but that expression
> is never used. This is dead code, right? Hence a bug?

Laziness is used when there is code which *might* need executing, not
when an expression is never going to be executed.

Examples include recurrence relations, where the code is executed until
the base case is reached. This is true for lazy lists. Another example
is propagators (where further computations may be performed, at the
discretion of a predicate).

[lazy list]
> ...
> But in a sense this contains dead code too. You've written down your
> desire to construct all the squares, and then later you change your
> mind and take only the first 10.

You haven't written down a "desire to construct all squares" any more
than writing the equivalent recurrence relation does:

  x_0 = 0
  x_(n+1) = x_n + 2n + 1

You have simply defined how any square may be computed.

> In OCaml you'd write this correctly as something like:
>
>   List.map (fun x -> x * x) (range 1 10)

This assumes that you know the 10 in advance, and it also loses out on
memoization. What if you want to compute up to 10, 12 and 14?

> (Sorry if this appears like a troll, but actually I'm genuinely
> interested in whether laziness is useful, or indicates a bug).

Laziness can definitely be useful.
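To make the memoization point concrete, here is a minimal sketch of a
memoized lazy list of squares. The llist type and the take function are
just illustrative helpers, not anything from the standard library:

  type 'a llist = Cons of 'a * 'a llist Lazy.t

  (* All the squares 0, 1, 4, 9, ... via the recurrence
     x_0 = 0, x_(n+1) = x_n + 2n + 1; no length is fixed anywhere. *)
  let squares : int llist =
    let rec from n sq = Cons (sq, lazy (from (n + 1) (sq + 2 * n + 1))) in
    from 0 0

  (* Take a prefix; only that prefix is ever computed, and because
     Lazy.t memoizes each forced tail, a later, longer take reuses
     the elements already computed. *)
  let rec take n (Cons (x, tl)) =
    if n <= 0 then [] else x :: take (n - 1) (Lazy.force tl)

  let first_10 = take 10 squares  (* computes the first squares *)
  let first_12 = take 12 squares  (* only the new elements cost anything *)

Nothing here is dead code: every suspension might be forced, it just
isn't forced until something actually asks for it.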
Whilst writing a compiler, I recently noted that the set of types used
in each expression and subexpression, which I was computing and storing
eagerly, was not always needed. By making the construction lazy, I
avoided many unnecessary computations (but not all of them, hence it
isn't dead code) and the program now runs much faster.

From a theoretical standpoint, Okasaki pointed out that imperative
programs are often faster because they choose to amortise the costs of
a sequence of operations, grouping them into batches. In contrast,
functional programs often don't amortise costs. This gives them better
worst-case behaviour but worse average-case behaviour. The Map and
Hashtbl data structures are fine examples of this. Okasaki suggested
that the solution might be to use laziness in functional approaches to
amortise the costs of operations, lazily storing the necessary changes
and then forcing the evaluation of all outstanding changes only when
necessary.

So laziness can be used as an optimisation.

Cheers,
Jon.
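P.S. For anyone who wants to see what the lazy-annotation trick looks
like in code, here is a minimal sketch along the lines of the compiler
example above. The expr type and the types_of_expr analysis are made-up
stand-ins; the point is only that the suspension costs nothing if it is
never forced, and is evaluated at most once if it is:

  type expr = Int of int | Add of expr * expr

  (* Stand-in for an expensive analysis, e.g. computing the set of
     types used in an expression and its subexpressions. *)
  let rec types_of_expr = function
    | Int _ -> ["int"]
    | Add (a, b) -> types_of_expr a @ types_of_expr b

  (* Annotate each node with a suspended analysis rather than an
     eagerly computed one. *)
  type annotated = { e : expr; types : string list Lazy.t }

  let annotate e = { e = e; types = lazy (types_of_expr e) }

  let () =
    let node = annotate (Add (Int 1, Add (Int 2, Int 3))) in
    (* Nothing has been computed yet. If no later pass looks at the
       types, the work is never done; the first force below runs the
       analysis once, the second reuses the memoized result. *)
    Printf.printf "%d types\n" (List.length (Lazy.force node.types));
    Printf.printf "%d types (cached)\n" (List.length (Lazy.force node.types))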