TL;DR: there is no *generic* technique to improve any design; rather, it's a methodology that consists of finding invariants that your functions respect, and tweaking their behavior so that they respect nicer, simpler invariants. The trick, in my experience, is that this practice arises naturally when you are careful about testing and have a random-testing library easily available.
Long version:
Quickcheck-style property testing ("generate random x, y, z and check that they satisfy this property") encourages users to formulate the invariants/properties of the function(s) they are testing as first-order formulas (usually forall-only). In my experience, this is an excellent mindset to put code authors in while they are designing and implementing the function (so these tests should be written simultaneously with the implementation, not after it), because it makes you think about the properties the function should have, and this is a very effective way to make the right choices on corner cases: most choices will *not* respect nice properties, and those that do are the right ones.
In Batteries we use the qtest library (
https://github.com/vincent-hugot/iTeML ) to write inline random tests, but they are less common than unit tests (they take more effort to write). I just looked (git grep -A2 "\$Q") and they seem to fit into three big categories:
1. Round-trip tests
(decode (encode s) = s)
(eq li (li |> enum |> of_enum)).
2. Equivalence of the function implementation with a naive/simpler implementation,
(eq (filter p v) (to_list v |> List.filter p |> of_list))
(popcount x = popcount_sparse x)
(to_list (List.fold_left insert empty l) = List.sort Pervasives.compare l)
3. A bunch of more diverse tests that are harder to formulate generically.
Most of the benefit comes from (3) of course, but (2) can also be a way to settle corner cases -- although usually you don't have to think this way to "know" what the right behavior is.
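To make categories (1) and (2) concrete, here is a minimal self-contained sketch of such tests. It uses a hand-rolled random generator instead of qtest; the names (random_list, check) and the toy properties are mine, for illustration only. A real library like qtest/QCheck additionally handles shrinking and counterexample reporting.

```ocaml
(* Hypothetical minimal random-testing harness, for illustration. *)
let random_list () =
  List.init (Random.int 20) (fun _ -> Random.int 100)

let check ~name prop =
  for _ = 1 to 1000 do
    let l = random_list () in
    if not (prop l) then failwith ("property failed: " ^ name)
  done

(* Category (1): a round-trip property. *)
let () = check ~name:"rev round-trip"
    (fun l -> List.rev (List.rev l) = l)

(* Category (2): equivalence with a naive model implementation.
   Insertion into a sorted list must agree with List.sort. *)
let rec insert x = function
  | [] -> [x]
  | y :: ys -> if x <= y then x :: y :: ys else y :: insert x ys

let () = check ~name:"fold insert = sort"
    (fun l -> List.fold_left (fun acc x -> insert x acc) [] l
              = List.sort compare l)
```

The point is not the harness itself but the shape of the properties: each one is a closed formula quantified over a random input.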
Some examples of (3) are the following:
val edit_distance : string -> string -> int
(edit_distance s1 s2 = edit_distance s2 s1)
val nsplit : ('a -> bool) -> 'a list -> 'a list list
(xs = join sep (nsplit ((=) sep) xs))
(nsplit ((=) sep) la @ nsplit ((=) sep) lb = nsplit ((=) sep) (la @ [sep] @ lb))
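To illustrate how such laws pin down corner-case behavior, here is a hypothetical nsplit implementation (Batteries' actual code may well differ) together with a join that satisfies the first property above. Note in particular how the round-trip law forces (nsplit p []) to be [[]] rather than []:

```ocaml
(* Hypothetical nsplit: cut a list into groups at every element
   satisfying p, dropping the separators themselves. *)
let nsplit p xs =
  let cur, acc =
    List.fold_right
      (fun x (cur, acc) ->
         if p x then ([], cur :: acc) else (x :: cur, acc))
      xs ([], [])
  in
  cur :: acc

(* join intersperses the separator between groups and concatenates. *)
let join sep = function
  | [] -> []
  | g :: gs -> g @ List.concat_map (fun g' -> sep :: g') gs

(* The law (xs = join sep (nsplit ((=) sep) xs)) only holds on the
   empty list if nsplit returns [[]], since join sep [[]] = []
   while join sep [] = [] would also hold -- but then
   nsplit ((=) 0) [0] = [[]; []] is forced by the second law. *)
```

Usage: nsplit ((=) 0) [1; 0; 2] gives [[1]; [2]], and join 0 on that result recovers [1; 0; 2].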
In the case discussed in GPR#10,
val split : ~sep:string -> string -> string list
I found it rather hard to get a good design for the corner cases, for example what the value of (split ~sep sep) should be, or (split ~sep ""), or (split ~sep:"" s), or (split ~sep:"" ""). My solution to get a meaningful behavior on all those cases was to ask: what is a function (concat_splits) such that
split ~sep (sa ^ sb) = concat_splits (split ~sep sa) (split ~sep sb)
?
In more formal terms, that corresponds to asking for split to transport the monoid structure of strings. Formally one would also need to specify (split ~sep "") separately, since the empty string is the identity of the monoid; but in fact finding any reasonable concat_splits function also answers this question, because (split ~sep "") must then be an identity for concat_splits.
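As a sketch of the behavior this law forces, here is a version restricted to single-character separators, where the stdlib's String.split_on_char already has the right semantics. The concat_splits below is my guess at the merging function, not necessarily the one from the PR: it glues the last chunk of the left split to the first chunk of the right split, since they belong to the same field. (For multi-character separators the law is subtler, because sep can straddle the concatenation boundary.)

```ocaml
(* Hypothetical concat_splits: merge the two boundary chunks. *)
let concat_splits left right =
  match List.rev left, right with
  | last :: rev_init, first :: rest ->
      List.rev_append rev_init ((last ^ first) :: rest)
  | _ -> assert false  (* split never returns an empty list *)

(* Single-character splitter from the stdlib, fixed to ',' here. *)
let split s = String.split_on_char ',' s

(* The monoid-transport law. *)
let law sa sb = split (sa ^ sb) = concat_splits (split sa) (split sb)
```

Note how the law settles the corner cases: split "" = [""] (so that it is an identity for concat_splits), and split "," = [""; ""].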
Daniel Bünzli independently designed this function on his end, thinking about different invariants, and we got the exact same behavior on both ends. That's anecdotal evidence that this approach can lead to more objective design choices.
(In fact I would say that property testing is *more* useful for API guidance than for actual testing -- whether at implementation time or against regressions; in my experience you always also write small unit tests to specifically exercise the corner cases you thought about, and those tend to suffice to catch the implementation bugs or regressions that are easily found by testing.)