It sure would be nice if OPAM happened to provide installable OCaml binaries, but since real production systems are an amalgam of many different moving parts, I think that would solve only one small part of a larger problem. An increasingly popular solution - taking Malcolm's idea to its logical conclusion - is to deploy virtual machine images. In my company, for example, we build such images frequently as part of continuous integration and, during a deploy, just slowly cycle thousands of cloud instances over to the new image. LXC (chroot on steroids) is another popular way to implement this strategy with less overhead, and we've also experimented with CDE (http://www.pgbovine.net/cde.html).
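To make the lighter-weight variant concrete, here is a purely illustrative sketch (the name "myservice" and all paths are made up); the important property is that the root filesystem is assembled from artifacts built elsewhere, never compiled on the target:

    # Hypothetical chroot/LXC-style image assembly; names and paths are
    # placeholders, not a recommendation.
    ROOT=/srv/roots/myservice-$(date +%Y%m%d%H%M)
    mkdir -p "$ROOT"/usr/local/bin "$ROOT"/etc/myservice

    # copy in the pre-built binary and its config; nothing is built here
    cp _build/myservice.native "$ROOT"/usr/local/bin/myservice
    cp deploy/myservice.conf   "$ROOT"/etc/myservice/

    # run it in isolation; LXC wraps this same kind of root filesystem
    # with namespaces and resource limits instead of a plain chroot
    chroot "$ROOT" /usr/local/bin/myservice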
I understand Facebook somehow manages to compile one ginormous binary, though: http://arstechnica.com/business/2012/04/exclusive-a-behind-the-scenes-look-at-facebook-release-engineering/


On Wed, May 29, 2013 at 11:02 PM, Malcolm Matalka <mmatalka@gmail.com> wrote:
No, I'm not suggesting building on every machine.  I am suggesting using
a build server, such as Jenkins, Bamboo, or Hydra, to build and package
your system; you can then deploy it through another mechanism.
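As a rough sketch of what that build-server step might look like (the
compiler version, package list, and project layout are only examples):

    # Hypothetical CI job: pin the compiler, install dependencies,
    # build the artifact that later gets packaged and deployed.
    opam switch 4.00.1
    eval `opam config env`
    opam install -y ocamlfind lwt
    make            # e.g. produces _build/myservice.native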

I believe having Opam generate RPMs (or whatever packages you use) is wrong because:

1 - There are a lot of options when it comes to building a package.
What should the init scripts look like?  Where should the various files
go?  What about configs, etc.?  These are issues opam should not concern
itself with.  On top of that, there are already tools for building
RPM packages from a directory structure, such as fpm (see the sketch
after this list).

2 - Which package systems should Opam support?  RPM?  Deb?  I use Nix,
why don't you support me?  Some other guy uses something else, what
about him?  I would predict a lot of heated and ultimately fruitless
mailing-list discussions about people wanting support for their package
format in some particular way.

3 - *Maybe* Opam should support exporting the closure of a package to an
external directory structure, then you can package it in any way you
want.
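
(For point 1, a hedged example of the fpm route -- the package name,
version, and directory layout are invented for illustration:)

    # Hypothetical: turn an exported directory tree into an RPM with fpm.
    fpm -s dir -t rpm \
        -n myservice -v 1.0.0 \
        --prefix /opt/myservice \
        -C ./export .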

However, IME (and someone with more OCaml knowledge please correct me),
if you are deploying to production on a lot of machines, I don't
really see why you would want to install OCaml libraries on those
machines.  OCaml libraries are statically linked into the native binary,
so copying the produced binary should be sufficient.  This is how I
deploy OCaml apps to the clusters I have worked on, with success.
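Concretely, something along these lines (the package names, file names,
and host names are placeholders):

    # Hypothetical build-and-copy deployment.
    ocamlfind ocamlopt -package unix,str -linkpkg main.ml -o myservice

    # the OCaml code is linked into the binary, so only the binary (and
    # any C shared libraries it needs) has to reach the target hosts
    for host in node01 node02 node03; do
        scp myservice "$host":/usr/local/bin/
    done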

Perhaps I am missing something?

If you want a 'deploy' command to be added to Opam that takes a
configuration and pushes builds to machines, I am vehemently opposed to
that.  If you want a mechanism for exporting a package closure someplace
else that you can package, I am less opposed to that.  But I don't
develop Opam so my opinion doesn't really matter :)

/M

Chet Murthy <murthy.chet@gmail.com> writes:

> I'm glad we're having this discussion.
>
> On Thursday, May 30, 2013 06:52:25 AM Malcolm Matalka wrote:
>> I think it would be wrong for opam to try to solve this problem.  There
>> are already many tools available for deploying (Ansible, Puppet, Chef,
>> Fabric, Capistrano).  Such a layer can be built on top of opam if need be.
>
> I think this is incorrect.  Let me explain.
>
> (1) when we look at deploying complex collections of code/libs/data
> onto multiple machines, usually we assume that the code has already
> been built.
>
> (2) but let's first dispatch the case where the code has -not- been
> built.  In such a case, I presume you're proposing that the code be
> built on each machine, yes?
>
>   (a) this drastically increases the CPU required to perform upgrades
>   and deploys
>
>   (b) but far, far, far more importantly, it means that on each
>   machine, a nontrivially complex script runs that builds the actual
>   installed binaries.  If that script contains -any- nondeterminism or
>   environmental sensitivity, it could produce different results on
>   different machines.  The technical term is "version skew".
>
> In scale-out systems, this sort of "skew" is absolutely fatal, because
> it means that machines/nodes are not a priori interchangeable.  And
> all of fail-fast fault-tolerance depends on nodes being
> interchangeable.
>
> (3) But let's say that what you really mean is that we should use
> tools like puppet/chef/capistrano to copy collections of
> binaries/libs/data to target machines and install them.  These
> scripts/recipes are written by some person.  You could have equally
> well suggested that that person build Debian packages (or RPMs) of
> each OPAM package, writing out all the descriptions and manifests.
>
> And manually specifying all dependencies and requirements.
>
> Either way, that person is doing a job that OPAM already does a lot
> of, and does quite well.  Gosh, wouldn't it be nice if OPAM could
> generate those RPMs?  Well, it's a little more complicated than that,
> but really, not much more.  The complexity comes in that you -might-
> (I'm not saying I have this part figured out yet) want ways to
> -generalize- (say) the camlp5 package so that it could be installed on
> many different base OPAM installations.
>
> But setting aside that nice-to-have, imagine that OPAM knew how to
> generate RPMs from each package it installed, and from the ocaml+opam
> base itself.  You combine those, and you can:
>
>   (i) install ocaml, opam, and a bunch of packages
>
>   (ii) push a button, and out come a pile of RPMs, along with the
>   dependencies amongst them (and hopefully on the relevant
>   environmental RPMs, e.g., libpcre-dev for pcre-ocaml), so that
>   you can just stuff those RPMs into a YUM repo, go to a second box,
>   and say
>
>     "yum install opam ocaml pcre-ocaml"
>
>   and get everything slurped down and installed, just as if OPAM had
>   installed it all, package-by-package.
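>
> (A back-of-the-envelope sketch of that last mile, with invented
> hostnames and paths -- createrepo and a .repo stanza are the standard
> YUM machinery I have in mind:)
>
>     # hypothetical: publish the generated RPMs as a YUM repository
>     mkdir -p /var/www/html/ocaml-repo
>     cp *.rpm /var/www/html/ocaml-repo/
>     createrepo /var/www/html/ocaml-repo
>
>     # on the second box, point YUM at it, then install as above
>     cat > /etc/yum.repos.d/ocaml.repo <<EOF
>     [ocaml]
>     name=OCaml packages generated by OPAM
>     baseurl=http://buildhost/ocaml-repo
>     enabled=1
>     gpgcheck=0
>     EOF
>     yum install opam ocaml pcre-ocaml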
>
> --chet--
>
> P.S. And this doesn't even get into the unsuitability of chef/puppet
> for managing software package installation.  There's a reason that no
> distro uses such schemes to install the large and complex sets of
> packages needed to run a modern Linux box.  And why there is no Linux
> version of Microsoft's "DLL Hell".  Linux distros by and large (and
> esp Debian and Ubuntu) have worked hard to make package installation
> foolproof -- and chef/puppet etc are anything but.

--
Caml-list mailing list.  Subscription management and archives:
https://sympa.inria.fr/sympa/arc/caml-list
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
Bug reports: http://caml.inria.fr/bin/caml-bugs