I would like to pose one really perverse question (perverse because it
mentions overloading).
I have done a lot of thinking on the subject of type inference with
overloading. I have also read a lot; however, no paper has satisfied me. I
don't like constraint-based approaches (neither GCaml-style nor System
CT-style), as I find them too difficult for the user to understand.
I have been trying to think of another mechanism for inferring overloaded
types, but have so far been unsuccessful.
Does anyone have any idea what kind of algorithm and data structures the
compiler would need in order to correctly infer the types of these functions:
a : int -> float -> int
a : float -> int -> int
b : int -> int -> int
b : float -> int -> int
let f1 x y =
  let z = a x y in
  b x y + z

let f2 x y z =
  let r = a x z in
  let s = a z y in
  b x y + r * s
It is fairly clear to a human that the first function has type
float -> int -> int; it takes a bit more thought to see that the second one
has type int -> int -> float -> int. But while a human sees the whole
"program" at a glance, the compiler cannot perceive the picture as a whole,
so presumably some fairly complicated algorithm is needed.
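For what it's worth, here is a minimal brute-force sketch of one naive approach (my own illustration, not from any paper): enumerate every combination of overload choices for the calls in a body, and keep only the combinations in which each variable receives a single consistent type. The names `ty`, `a_sigs`, and `b_sigs` are hypothetical.

```ocaml
(* Brute-force overload resolution sketch for the two examples above. *)
type ty = Int | Float

let show = function Int -> "int" | Float -> "float"

(* Overload tables: (argument types, result type). *)
let a_sigs = [ ([Int; Float], Int); ([Float; Int], Int) ]
let b_sigs = [ ([Int; Int], Int); ([Float; Int], Int) ]

(* f1 x y = let z = a x y in b x y + z
   Pick one overload for each call; keep choices in which x and y
   receive the same type in both calls. *)
let f1_types =
  List.concat_map (fun (aa, _) ->
      List.filter_map (fun (ba, bret) ->
          match aa, ba with
          | [ax; ay], [bx; by] when ax = bx && ay = by ->
            Some [ax; ay; bret]
          | _ -> None)
        b_sigs)
    a_sigs

(* f2 x y z = let r = a x z in let s = a z y in b x y + r * s
   Same idea with three calls; the constraints are
   x = a1.0 = b.0,  z = a1.1 = a2.0,  y = a2.1 = b.1. *)
let f2_types =
  List.concat_map (fun (a1, _) ->
      List.concat_map (fun (a2, _) ->
          List.filter_map (fun (ba, bret) ->
              match a1, a2, ba with
              | [x; z], [z'; y], [bx; by]
                when z = z' && x = bx && y = by ->
                Some [x; y; z; bret]
              | _ -> None)
            b_sigs)
        a_sigs)
    a_sigs

let () =
  let print name tys =
    List.iter
      (fun t ->
         Printf.printf "%s : %s\n" name
           (String.concat " -> " (List.map show t)))
      tys
  in
  print "f1" f1_types;
  print "f2" f2_types
(* prints:
   f1 : float -> int -> int
   f2 : int -> int -> float -> int *)
```

Of course, this only illustrates the search: it ignores the constraints that `+` and `*` impose on result types (both happen to be int here), and the enumeration is exponential in the number of overloaded calls, so a real algorithm would have to interleave the choice of overloads with unification and prune early.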
Does anyone have any ideas?
Tom