I would also point out that Facebook's experience with compilation speed under Flambda is rather worse than what we've found at Jane Street.  We typically see closer to 2.5x than 5x at -O3.  However, we are working to reduce this further.

Mark

On 24 September 2017 at 21:04, Josh Berdine <josh@berdine.net> wrote:
On Sep 24, 2017, at 12:28 PM, Yaron Minsky <yminsky@janestreet.com> wrote:
For what it's worth, the main blocker for us turning Flambda on by default in classic mode is getting build artifact size and compilation speed down basically to the same level as closure-compilation. We're getting pretty close to that goal, though it will take a bit more time to get the improvements in question upstreamed.

So getting Flambda enabled by default isn't that far away (though most of the real benefits will require -O3, which will still lengthen compilation by quite a bit).

Another data point on this: at Facebook we recently switched the production infer binaries over to flambda -O3 (https://github.com/facebook/infer/commit/f8d7c810452ce3a4d2e7027e38f5d00426a2a917). For local builds during development, we usually build without flambda, or actually even just bytecode. But for infer, flambda -O3 is worth 15-20% elapsed (~25% CPU) time, so it does not take an abnormal analysis run before that pays off the ~5x compile-time deficit. (Given that we have to distribute a custom clang with the analyzer, build artifact size is basically in the noise.)
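For anyone wanting to try this setup, enabling flambda takes two pieces: a compiler variant built with flambda support, and the -O3 flag at compile time. A rough sketch (the switch version and dune stanza are illustrative, not taken from the infer build):

```shell
# Install an OCaml compiler variant built with flambda support
# (version shown is only an example)
opam switch create 4.05.0+flambda

# Confirm the compiler has flambda enabled
ocamlopt -config | grep flambda   # should print: flambda: true

# Pass -O3 when compiling natively; with dune this can go in an
# env stanza in the project's dune file, e.g.:
#   (env (release (ocamlopt_flags (:standard -O3))))
# Or invoke ocamlopt directly:
ocamlopt -O3 main.ml -o main
```

Note that -O2/-O3 are only meaningful on flambda-enabled compilers; on a non-flambda compiler they are accepted but have no effect.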

Cheers, Josh