caml-list - the Caml user's mailing list
From: Markus Mottl <markus.mottl@gmail.com>
To: Mohamed Iguernlala <iguer.auto@gmail.com>
Cc: Jesper Louis Andersen <jesper.louis.andersen@gmail.com>,
	OCaml List <caml-list@inria.fr>
Subject: Re: [Caml-list] Status of Flambda in OCaml 4.03
Date: Sun, 17 Apr 2016 11:43:01 -0400
Message-ID: <CAP_800p7DoxRk2=Ka6ezLarbDASuHBwxxDf1eWwRD3S2Z0c6Xw@mail.gmail.com>
In-Reply-To: <5713508A.905@gmail.com>

Ultimately, comprehensive benchmarks will be necessary to determine
how much of a benefit Flambda really is.  But just looking at the
produced assembly and some anecdotal performance evidence, it seems
clear to me that Flambda is a substantial improvement.

If we judge a feature by its impact on how people write code, I can
already say it has made a huge difference to me.  For example, before
Flambda was available, I had been using Camlp4 macros in a project,
because it was otherwise impossible to avoid significant performance
penalties, including in memory consumption (closure sizes), without a
lot of code duplication.  Now I can avoid Camlp4 entirely while
writing much more elegant, pure OCaml that compiles to machine code
that is just as efficient - if not better.
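
To give a flavor of the kind of code I mean, here is a small, purely
illustrative sketch (not code from my actual project): a generic
higher-order loop over a float array.  At a call site where the closure
is known, Flambda (especially at -O3) can inline both the loop and the
closure, so the generic version can compile to roughly the same machine
code as a hand-specialized loop; without Flambda, one way to get that
effect was to generate the specialized variants with Camlp4 macros.

  (* Illustrative only: a generic fold over a float array. *)
  let fold_floats f init (a : float array) =
    let acc = ref init in
    for i = 0 to Array.length a - 1 do
      acc := f !acc (Array.unsafe_get a i)
    done;
    !acc

  (* With Flambda the closure below need not be allocated at all, and the
     call through [f] can become a direct, inlined call. *)
  let sum_of_squares a = fold_floats (fun acc x -> acc +. x *. x) 0. a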

FWIW, I usually also use "-O3 -unbox-closures", which seemed to work
best in some ad-hoc tests.
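
(For concreteness, on a flambda-enabled switch that just means passing
those flags to ocamlopt, e.g. something like the line below, where the
file names are only placeholders:

  ocamlopt -O3 -unbox-closures -o main main.ml

or the equivalent flag settings in whatever build system you use.)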

On Sun, Apr 17, 2016 at 4:59 AM, Mohamed Iguernlala
<iguer.auto@gmail.com> wrote:
> Good results, but it would be better to compare with 4.03.0+trunk.
>
> Have you used particular options for flambda (e.g. -O3 -unbox-closures)?
> Have you modified the default value of the -inline option?
>
> I noticed better performance with "-O3 -unbox-closures" on Alt-Ergo, but
> I probably have to adjust the value of "-inline" (which is currently set
> to 100).
>
> - Mohamed.
>
>
> On 17/04/2016 10:43, Jesper Louis Andersen wrote:
>
> I tried `opam switch ocaml-4.03.0+trunk+flambda` on the Transit format
> encoder/decoder I have. I wanted to see how much faster flambda would make
> the code, since I've heard figures of 20% and 30% thrown around. It is not
> very optimized code; in particular, the encoder path is rather elegant, but
> awfully slow. Well, not anymore:
>
> 4.02:
>  Name         Time/Run      mWd/Run   mjWd/Run   Prom/Run   Percentage
> ------------ ---------- ------------ ---------- ---------- ------------
>  decode         2.12ms     352.86kw    34.86kw    34.86kw       27.88%
>  encode         5.07ms     647.93kw   263.69kw   250.40kw       66.70%
>  round_trip     7.61ms   1_000.79kw   298.54kw   285.26kw      100.00%
> 4.03.0+trunk+flambda:
>
>  Name         Time/Run      mWd/Run   mjWd/Run   Prom/Run   Percentage
> ------------ ---------- ------------ ---------- ---------- ------------
>  decode         2.04ms     319.83kw    35.94kw    35.94kw       43.97%
>  encode         2.65ms     422.67kw   130.88kw   117.59kw       56.95%
>  round_trip     4.65ms     742.50kw   164.85kw   151.56kw      100.00%
>
> Pretty impressive results. Note that the heavy lifting is done by the yajl
> JSON parser, which puts a lower bound on the achievable times. But I think
> the speedup on the encode path is rather good.
>
> Note that the benchmark is probably flawed: some time passed between these
> two runs, so there might be a confounder hidden in other fixes, either to
> yajl or to other parts of the compiler toolchain. However, I think the
> result itself stands, since in practice my encoding time was just cut in
> half.
>
>
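
For readers who don't recognize the table format: the columns above
(Time/Run, mWd/Run, mjWd/Run, Prom/Run) look like output from Jane
Street's Core_bench library.  Just as an illustration of how such numbers
are typically produced - this is not Jesper's benchmark, and the
decode/encode stand-ins below are made up - a minimal Core_bench program
looks roughly like this, assuming a Core/Core_bench version contemporary
with 4.03:

  (* Hypothetical stand-ins for the real Transit decoder/encoder. *)
  let decode (s : string) = String.length s
  let encode (n : int) = String.make n 'x'

  open Core.Std
  open Core_bench.Std

  let () =
    Command.run
      (Bench.make_command
         [ Bench.Test.create ~name:"decode" (fun () -> ignore (decode "abc"))
         ; Bench.Test.create ~name:"encode" (fun () -> ignore (encode 1024)) ])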



-- 
Markus Mottl        http://www.ocaml.info        markus.mottl@gmail.com

Thread overview: 23+ messages
2016-03-08 22:10 Markus Mottl
2016-03-08 22:53 ` Alain Frisch
2016-03-09  3:55   ` Markus Mottl
2016-03-09  7:14     ` Mark Shinwell
2016-03-10  0:59       ` Markus Mottl
2016-03-10  1:32         ` Yotam Barnoy
2016-03-10  1:43           ` Markus Mottl
2016-03-10  7:20             ` Mark Shinwell
2016-03-10 15:32               ` Markus Mottl
2016-03-10 15:49                 ` Gabriel Scherer
2016-04-17  8:43                   ` Jesper Louis Andersen
2016-04-17  8:59                     ` Mohamed Iguernlala
2016-04-17 15:43                       ` Markus Mottl [this message]
2016-03-10 20:12                 ` [Caml-list] <DKIM> " Pierre Chambart
2016-03-10 21:08                   ` Markus Mottl
2016-03-10 22:51                   ` Gerd Stolpmann
2016-03-11  8:59                     ` Mark Shinwell
2016-03-11  9:05                       ` Mark Shinwell
2016-03-11  9:09                       ` Alain Frisch
2016-03-11  9:26                         ` Mark Shinwell
2016-03-11 14:48                           ` Yotam Barnoy
2016-03-11 15:09                             ` Jesper Louis Andersen
2016-03-11 16:58                       ` Markus Mottl
