On Wed, Sep 15, 2010 at 11:37 PM, Jon Harrop <jonathandeanharrop@googlemail.com> wrote:
> Hmm, this would only optimize the bytecode fetch/decode step of the ocaml
> execution cycle. I am not sure that it will result in much real-world
> speedup.

It would be interesting to try. I suspect the C compiler would then also
optimize the sequences of operations on the data, for example something
like vector addition.

> In fact, that seems to be the main problem with many of these so-called
> JIT interpreters, which in my opinion, do not seem to have learnt from the
> HAL architectures of IBM OS's etc. Was probably also the problem with
> Transmeta; cheap compilation entails cheap performance.

Can you elaborate?


Well, what I would do is apply a fully optimizing compiler to a proper hardware abstraction layer; whether it runs just-in-time is irrelevant. I do not see why the system could not start doing this as soon as the code is loaded somewhere (rather than when it starts to run). What is certain is that a few simple transformations will not speed things up much.

The right way to do it is to determine hot blocks beforehand. Hot blocks can also be determined on the fly, but I do not think JIT is much needed for that. When the hot blocks are determined is almost certainly not crucial to the optimizations, although the more time the compiler has, the better. Simple optimizations will not have much impact: it is not as if you can undo the complexity of an optimizing compiler just because you compile just-in-time.

In Transmeta's case, you cannot translate obsolete CISC code into efficient VLIW code in real time; that puts too much strain on the translation step, which ought to have been obvious given the complexity of VLIW compilers. Sometimes outsiders have a better view.

Best,
 
--
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct