Yes: I prefer things the way they are.

I don't mean a revolution; rather, a symbiosis.

You can add overloading to ML but you must then add lots of type annotations
to your code. For example, vector length:

  let length (x, y) = sqrt(x*.x +. y*.y)

becomes:

  let length (x : float, y : float) = sqrt(x*x + y*y)

So you've saved the "." three times at the cost of writing ": float" twice,
because the overloaded * and + don't provide enough type information on their
own. You can complicate the type inference to counteract this, but then other
type errors become increasingly obscure, and programmers are forced to learn
the quirks you've added in order to debug their code.

Three solutions:
  * sqrt has type float -> float, so the compiler can infer float arguments
  * you write +. and *. at least once (those operators could still be available, couldn't they?)
  * let the compiler overload your function (length : int * int -> float and length : float * float -> float), then drop the "int" version, since you never use it in your code.
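For the record, the second solution is what OCaml already does: a single use of a float-only operator pins down every type, so no annotation is needed anywhere. A minimal check (standard OCaml, no new syntax assumed):

```ocaml
(* The float-only operators *. and +. fix the types of x and y,
   so the inferred type is float * float -> float with zero annotations. *)
let length (x, y) = sqrt ((x *. x) +. (y *. y))

let () =
  (* 3-4-5 triangle: length (3., 4.) should be 5. *)
  assert (abs_float (length (3., 4.) -. 5.) < 1e-9);
  print_endline "ok"
```

So in practice you only "pay" the dot where the overloaded version would have made you pay an annotation instead.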
 
Finally, I don't want my types discovered at run time, because that makes my
code slower and uses more memory. I'd rather have to box manually, so that the
fast code is also the concise code.

Polymorphic variants use more memory too. And records could be optimised so that type information is only added when the program actually needs it.
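The polymorphic-variant overhead is easy to observe: an ordinary constructor is encoded in the block's tag, while a polymorphic variant stores its constructor hash as an extra field. A quick inspection using the unsafe Obj module (for illustration only, not for production code):

```ocaml
(* Ordinary variant: the constructor lives in the block tag,
   so [A 1] carries one field of payload. *)
type t = A of int

let ordinary = Obj.size (Obj.repr (A 1))

(* Polymorphic variant: the constructor's hash occupies an extra
   field next to the payload, so [`A 1] carries two fields. *)
let polymorphic = Obj.size (Obj.repr (`A 1))

let () = Printf.printf "ordinary = %d, polymorphic = %d\n" ordinary polymorphic
```

One extra word per value is the price of the open, structurally-typed constructors.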

From my point of view, your suggestions are mostly steps backwards (towards
Lisp, C++ etc.).

Thank you for your comment :)

- Tom