If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. (The apparent number mismatch accounts for slowdowns and Amdahl's law in general.) And, with no offense intended to Goby, a brand new scripting language built on top of a language like Go (already ~2x-3x slower than C in general) could easily see performance penalties of 100x or 500x vs C. (Think logarithmically here.)
Even Go, at 2x-3x slower than C, is already not a terribly great choice for true CPU-intensive loads. It's fast enough it can fit in some scenarios, but if you really start to ramp up you're going to want to switch to something else.
Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it? I find the idea that in one second, someone cares deeply enough about CPU performance to call their load "CPU-heavy" and want to learn how to use many cores to process it, but in the next second is oblivious to the issues of using languages that are very slow on the CPU to do the work, to be a very bizarre shape of concerns about performance. It's like someone asking how they can move ten tons of something from New York to LA as quickly as possible, but insisting that they will only use bicycles to do it. The fact that you may be able to work out a way to do it with only bicycles, even perhaps surprisingly quickly compared to what one's initial reaction may have been, isn't going to change the fact that it sure is weird how one moment you're concerned about doing it as quickly as possible and in the next moment completely oblivious to the performance consequences of the chosen tools.
> "If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. "
> "Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?"
You first talk about interpreters and then you talk about performance. They're not perfectly correlated. Given that modern language implementations are neither the simple token-processing state machines of the 1980s nor the simple compilers of the same period, this equivocation of yours seems out of place in the 2010s. A proper interpreter like LuaJIT can not only reach very decent performance on computationally expensive stuff (1x-2x of C run time in SciMark 2, depending on the particular test, for example) but also delay computation to as late a time as possible and then generate specialized code based on the increased amount of information available. That can be done not only across modules (which static compilers still struggle with without some kind of link-time optimization) but also depending on actual data at run time (which static compilers are completely incapable of, unless they're somehow embedded into the final application - an option that, e.g., Lisp programs can use if they so choose).
LuaJIT is a JIT, not an interpreter. JITs are fundamentally different. I seriously doubt Goby has a JIT yet, to the point I'm not even going to check the source.
LuaJIT is also an outlier. I consider it a solid point in favor of the argument that if you build a language for speed from day 1, you can do pretty well and still build in a lot of nice features. Even so, I understand LuaJIT had to drop some Lua features to get there. However, if you first design your language's features with a lot of focus on convenience, and then try to make them fast without compromise, you end up in the PHP/Python/Perl/Ruby/Javascript space, where no matter how much work you put into it you hit fundamental walls. (Yes, even JS with all modern JIT'ing is not really that fast of a language.) The counterargument to your point is that LuaJIT is pretty much all alone in its position on the performance, despite the fact that other seemingly-similar languages have had orders of magnitude more work poured into their JITs.
I think there's a lot of up-and-coming languages that have learned a lot about designing for performance and while, alas, LuaJIT's future seems dim, I believe a lot of languages like Nim and Crystal and even to some extent Go have learned about how to be nicer languages than C or C++ while not giving up tons of performance. LuaJIT, in my opinion, still has a place of honor in the history of programming languages, far outsized from its actual use.
(Rustaceans may be assured I have not forgotten them, I just think Rust is coming at this from a significantly different angle.)
> Even so, I understand LuaJIT had to drop some Lua features to get there.
Not true. LuaJIT is complete Lua. The differences, besides being JITted, are: parts written in assembly, heavy optimisations, and things like the FFI, which cannot be written in C89. Mainline Lua uses nothing but C89, which makes it run on almost anything; this is not the case for LuaJIT. Also, vanilla Lua is way smaller and its source code is cleaner and simpler. The divergence in the language is not because LuaJIT dropped features but because it was created while Lua was on version 5.1. Lua is now on 5.3 and LuaJIT hasn't caught up on everything yet; it is basically 5.1-compatible with some sprinkles of 5.2.
Goby doesn't have a JIT. It was created just 6 months ago, so many things take priority over performance improvements. And I'm not a programming language expert, so introducing a JIT in my first language is too hard for me.
> Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?
I didn't downvote you, so I'm not sure, but maybe it was because of the condescending tone (assuming what they care about from a simple question and telling them what they should care about instead), and because it didn't answer their question about parallelism in the first place.
I didn't assume; the question contained "CPU-heavy", and the answer is relevant because it's a common misconception that you can make up for a slow language with parallelization, but you can't. If you've got a CPU-heavy task, you will find in practice that even using a lot of threads you'll be lucky to get even a 3x speedup on wall-clock time, unless your problem is 100% strictly embarrassingly parallel. (I've tried a few times, which is where the 3x comes from. It's all I was able to get, and my tasks were very close to embarrassingly parallel, but the cruel nature of Amdahl's law is that it takes only very slight non-parallel components to wreck your speed.) It's not a viable solution in practice.
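To make the Amdahl's law point concrete, here's a minimal sketch of the arithmetic using the textbook form of the law (the function name and the chosen parallel fractions are mine, purely for illustration):

```python
# Amdahl's law: speedup on n cores when fraction p of the work parallelizes
# perfectly and the remaining (1 - p) is strictly serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a task that is 95% parallel gets nowhere near 16x on 16 cores:
print(round(amdahl_speedup(0.95, 16), 2))  # ~9.14
# At 80% parallel, 16 cores buy you only 4x:
print(round(amdahl_speedup(0.80, 16), 2))  # 4.0
```

This is why a "very close to embarrassingly parallel" task can still top out at a small multiple of single-core speed: the serial sliver dominates as core count grows.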
Didn't downvote, but perhaps a request for clarification: could you cite some benchmarks for these numbers? Especially the 100-500x speed loss for interpreted languages.
"Esp. The 100-500x speed loss for interpreted languages."
First, let me remind you that you have to think logarithmically here, not in absolute terms. 1-5x is the same sized range as 100-500x, about half an order of magnitude. (Pedants will correctly observe that half an order of magnitude is actually ~3.16x rather than 5x, but it's close enough for estimate work.)
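The "same sized range" claim is easy to check with a log scale (the helper function is mine, for illustration):

```python
import math

# A multiplicative range [low, high] spans log10(high/low) orders of magnitude.
def orders_of_magnitude(low, high):
    return math.log10(high / low)

print(round(orders_of_magnitude(1, 5), 3))      # ~0.699
print(round(orders_of_magnitude(100, 500), 3))  # ~0.699 -- identical width
print(round(10 ** 0.5, 2))                      # half an order of magnitude is ~3.16x
```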
I use 50x slower than C as my guideline for how fast conventional 1990s dynamic scripting languages running on a conventional interpreter are, based on: http://benchmarksgame.alioth.debian.org/u64q/which-programs-... You can see Erlang, Perl, Ruby, Smalltalk, Python3, and Lua (not LuaJIT, Lua, huge difference). You can find Node.JS, with all the mighty power of a JIT'd language, down on the next graph all the way on the right, hanging out somewhere around 10x slower than C, which seems to be all you can practically expect from a JIT'd dynamic scripting language; excepting my comments about LuaJIT in the cousin comment, I haven't seen anything that convinces me you can go any faster for that crop of languages. Of course, if somebody produces a 2x-slower-than-C Python JIT, I'll just update my understanding rather than insist it can't exist. But at the moment I see no particular reason to think that's going to happen.
You can also see Go at 2-3x slower than C just a bit to the right. (This is why I say Go is pretty fast for a scripting language, but you can see it's not all that fast when compared to the conventionally compiled languages. It takes a non-trivial loss on both not doing a lot of optimization, and requiring a lot of indirect vtable lookups when you use interfaces heavily which C++ often avoids and Rust aggressively avoids whenever possible.)
100-500x speed loss is just an estimate for a brand new, unoptimized scripting language... and, actually, it's a rather generous one; it could go another order of magnitude or two quite easily, especially in the very early days. While that may seem extreme, note that it's just another order of magnitude or so slower than the optimized dynamic scripting languages. For an unoptimized implementation, that's not necessarily a terrible estimate. As I understand it, Perl 6 is currently hanging out in the 100-500x slower than C range, though I see no fundamental reason they won't catch up to the current scripting languages at the very least once they have time to optimize. (Whether they can significantly exceed them I don't know; I don't even know that it's a goal, since the dynamic scripting languages are certainly plenty fast enough for a huge variety of tasks as-is and that will continue to be true indefinitely.) These languages aren't "stuck" there; it just takes time to optimize.
And my final caveat is to point out that A: fast != good and slow != bad, it's merely one element of a rich and complicated story for every language, and B: while benchmarks always have a certain subjectivity to them, we are broadly speaking observing objective facts here that, in particular, engineers responsible for creating solutions for people really, really ought to know and not dismiss because they make you feel bad. Being "insulted" at the idea that Python is meaningfully, fundamentally slower than Rust or something isn't going to change anything about the performance of your system, so it behooves you as engineers to be sure that you've lined your requirements, resources, and solutions all up correctly.
"These languages aren't "stuck" there, it's just that it takes time to optimize"
Another thing is that we have CPU architecture that is optimized for a one size fits all system when it comes to personal computing. If we truly wanted power from higher level, more expressive languages and programming systems, we would have architecture designed for those systems.
If you want to experiment with truly new programming languages and environments, you probably have to experiment with hardware too. Our present reality makes this difficult to change, which is really too bad.
But you may have chosen to use an interpreted language for other reasons. The GP's question still makes sense, relative to the other options s/he may consider (like Go itself, or Ruby).