Speeding up real-world applications can actually range from being very easy to incredibly difficult, depending on the facilities they rely on. And we are far from finished: while number-crunching applications can see substantial speed-ups, Rails is currently not accelerated by our modified interpreters.
As a services business, however, the CrossTwine Linker kernel allows us to quickly develop language-, domain-, and even problem-specific interpreters. In effect, we can “cheat” as much as we want if we know what the target problem is. (Cheat safely, that is; a problem-specific customization would retain 100% compatibility with the language, it would just be much faster on some kinds of workloads.)
So while our demos currently feature generic interpreters for Python 2.6 (only in the labs for now) and 3.1, plus Ruby 1.8 and 1.9, we can “easily” whip up an enhanced custom interpreter tuned for whatever performance problem you are experiencing; and we can decide to integrate these improvements into the mainline on a case-by-case basis.
How similar/dissimilar is CrossTwine Linker to PyPy? It sounds quite similar from what I was able to read, in the sense that it is a JIT VM with pluggable frontends. Does it support pluggable backends as well, or just the frontend?
There are quite a few similarities, but the design trade-offs are radically different:
* CrossTwine Linker is designed to be integrated into an existing language interpreter, without modifying its behavior besides improving execution performance. Any other difference (including when dealing with source and binary C/C++ extensions) is probably a bug.
* It is not a VM (nor a VM compiler), but rather a (constantly evolving) set of C++ engine pieces which can be adapted using “policy-based design.” So while the demos aim to be generally fast, we can tune the engine and its adapters for any specific use case; e.g.: you embed the Python interpreter in an application using the standard C API (no lock-in), and we make sure the binding behaves optimally with your application objects. (Note: we can also implement the binding as part of the service.)
* As such, everything is pluggable—except that we only target native hardware (PyPy is much more ambitious). But that's theory; I'll defer further comments on the backend to when we get the chance to implement a second one!
[Edit] Note that this means we support the full standard library—and all of its quirks—from day one. Which is both a curse and a blessing, because it is written with interpreted execution in mind, but that can be improved as time passes.
I applaud your efforts. One thing to note, though, is that the market share of Python 2.x is a lot larger than Python 3.1's, and I think this will be the case for some time, as Python 3.x requires lots of porting.
It would be great if you could optimize an interpreter for web applications, where Python is used a lot. I would be very interested in paying for a web-application-optimized interpreter, and you could probably do this optimization more generally?
You're not the only one to mention Python 2.x, and we've learned quite a few things about the language since we started this effort. :) I have updated the web page to mention that we now have a fully functional xtpython2.6 in the lab! It will ship with the alpha 2 release, which is due around mid-May.
Even I was surprised that coming up with this took only a couple of days of work: the two versions of the interpreter are very similar in structure, so most adapters could be reused as-is (Ruby 1.9 → 1.8 was more… challenging, to say the least).
I have only started looking into web applications and what can be done about them. They are a more delicate target than e.g. scientific computing, because many pieces (a lot of which are written in C) are involved in each request. Interestingly, while Django is currently seeing very modest speedups (4%) on the unladen-swallow benchmarks, Spitfire is up by 70%. Oh well, this wouldn't be as fun if my TODO list were too short, would it?