Crystal-lang has also been around for a while, with similar Ruby-like syntax and features like green threads, and a larger standard library. Performance is on par with Golang.
I believe this is partially due to the lack of incremental compilation, which is on the roadmap [0] for 1.0. The compiler itself builds in ~10 seconds. [1]
Interesting. The goal is "small and handy environment that mainly focusing on creating API servers or microservices". Having moved from Ruby to Go in the last year or two, I'd be really interested in what language features they're pushing toward that goal. I really enjoyed my work with Ruby, but I have found Golang pretty quick for this "backend" type of work.
On the back of this, I dug around a bit for other transpilers to/from Golang. So far:
Hi, I'm Stan, this project's creator. I will answer your questions later.
But before that please check out our sample site (written in Goby): http://sample.goby-lang.org/
When things settle down, I would love to read an article from you about what it's like to launch a new programming language by the end of the 2010s.
I'm under the impression that adoption is going way faster than in the previous decade, when PHP, Java, and C/C++ were kings that were hard to dethrone. But there's still a gap of a few years between the initial announcement of a language and its use in the wild - which is quite understandable, because people want to be sure the language will stick around before writing production code with it.
I would love to know what you do during those years, how you grow your language, how you simply manage to use it at work, or for your own projects. If you could write such article, that would be awesome :)
Thanks! I'll try to share my experience with others by writing posts or giving a talk at a conference. But actually Goby is just 6 months old, so you might need to wait a while :)
I think Go does a good job with concurrency support, so using it saves me a lot of time dealing with concurrency.
And in my opinion Go is simple to learn, which makes it easier for others to contribute. For example, this is only my second Go project, and most of Goby's contributors hadn't written any Go code before (of course I spent some time guiding them).
The last reason is that I think Go has a relatively big community and ecosystem, so it'll be easier for me to find resources.
If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. (The apparent number mismatch accounts for overhead and Amdahl's law in general.) And, with no offense intended to Goby, a brand new scripting language built on top of a language like Go (already ~2x-3x slower than C in general) could easily see performance penalties of 100x or 500x vs. C. (Think logarithmically here.)
Even Go, at 2x-3x slower than C, is already not a terribly great choice for true CPU-intensive loads. It's fast enough it can fit in some scenarios, but if you really start to ramp up you're going to want to switch to something else.
Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using, rather than just downmodding it? I find it a very bizarre shape of concern about performance that in one second someone cares deeply enough about CPU performance to call their load "CPU-heavy" and want to learn how to use many cores to process it, but in the next second is oblivious to the issues of using languages that are very slow on the CPU to do the work. It's like someone asking how they can move ten tons of something from New York to LA as quickly as possible, but insisting that they will only use bicycles to do it. The fact that you may be able to work out a way to do it with only bicycles, even perhaps surprisingly quickly compared to what one's initial reaction may have been, isn't going to change the fact that it sure is weird how one moment you're concerned about doing it as quickly as possible and the next moment you're completely oblivious to the performance consequences of the chosen tools.
> "If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. "
> "Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?"
You first talk about interpreters, then you talk about performance. They're not perfectly correlated. Modern language implementations are neither the simple token-processing state machines of the 1980s nor the simple compilers of the same period, so this equivocation of yours seems out of place in the 2010s. A proper interpreter like LuaJIT can not only reach very decent performance on computationally expensive stuff (1x-2x of C's run time in SciMark 2, depending on the particular test, for example) but also delay computation to as late a time as possible and then generate specialized code based on the increased amount of information available. That can be done not only across modules (which static compilers still struggle with, absent some kind of link-time optimization) but also based on actual data at run time (which static compilers are completely incapable of, unless they're somehow embedded into the final application - an option that, e.g., Lisp programs can use if they so choose).
LuaJIT is a JIT, not an interpreter. JITs are fundamentally different. I seriously doubt Goby is a JIT yet, to the point that I'm not even going to check the source.
LuaJIT is also an outlier. I consider it a solid point in favor of the argument that if you build a language for speed from day 1, you can do pretty well and still build in a lot of nice features. Even so, I understand LuaJIT had to drop some Lua features to get there. However, if you first design your language's features with a lot of focus on convenience, and then try to make them fast without compromise, you end up in the PHP/Python/Perl/Ruby/JavaScript space, where no matter how much work you put into it you hit fundamental walls. (Yes, even JS with all its modern JITing is not really that fast a language.) The counterargument to your point is that LuaJIT is pretty much all alone in its position on performance, despite the fact that other seemingly similar languages have had orders of magnitude more work poured into their JITs.
I think there are a lot of up-and-coming languages that have learned a lot about designing for performance, and while, alas, LuaJIT's future seems dim, I believe languages like Nim and Crystal, and even to some extent Go, have learned how to be nicer languages than C or C++ while not giving up tons of performance. LuaJIT, in my opinion, still has a place of honor in the history of programming languages, far outsized from its actual use.
(Rustaceans may be assured I have not forgotten them, I just think Rust is coming at this from a significantly different angle.)
> Even so, I understand LuaJIT had to drop some Lua features to get there.
Not true. LuaJIT is complete Lua. The differences, besides being JITted, are: parts written in assembly, heavy optimization, and things like the FFI, which cannot be written in C89. Mainline Lua uses nothing but C89, which makes it run on almost anything; this is not the case for LuaJIT. Also, vanilla Lua is way smaller and its source code is cleaner and simpler. The divergence in the language is not because LuaJIT dropped features but because it was created while Lua was on version 5.1. Lua is now on 5.3 and LuaJIT hasn't caught up on everything yet; it is basically 5.1-compatible with some sprinkles of 5.2.
Goby doesn't have a JIT. It was created just 6 months ago, so we have many things to do before performance improvements. And I'm not a programming language expert, so it would be too hard for me to introduce a JIT in my first language.
> Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?
I didn't downvote you, so I'm not sure, but maybe it was because of the condescending tone (assuming what they care about from a simple question and telling them what they should care about instead), and not answering their question about parallelism in the first place.
I didn't assume; the question contained "CPU-heavy", and the answer is relevant because it's a common misconception that you can make up for a slow language with parallelization, but you can't. If you've got a CPU-heavy task, you will find in practice that even using a lot of threads you'll be lucky to get a 3x speedup on wall-clock time, unless your problem is 100% strictly embarrassingly parallel. (I've tried a few times, which is where the 3x comes from. It's all I was able to get, and my tasks were very close to embarrassingly parallel, but the cruel nature of Amdahl's law is that it takes only very slight non-parallel components to wreck your speed.) It's not a viable solution in practice.
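To make the Amdahl's law point concrete, here's a quick back-of-the-envelope sketch in Go (my own illustration; the parallel fractions are hypothetical, not measurements from any real workload). Even a workload that is 75% parallel tops out around 3.4x on 16 cores:

```go
package main

import "fmt"

// amdahl returns the theoretical speedup on n cores for a workload
// in which fraction p of the run time parallelizes perfectly.
func amdahl(p, n float64) float64 {
	return 1 / ((1 - p) + p/n)
}

func main() {
	for _, p := range []float64{0.50, 0.75, 0.90, 0.99} {
		fmt.Printf("%.0f%% parallel: 16 cores -> %.1fx speedup\n",
			p*100, amdahl(p, 16))
	}
	// Even at 99% parallel, 16 cores only yield about a 13.9x speedup,
	// and at 75% parallel you're stuck near 3.4x.
}
```

Multiply that by a 10x-500x interpreter penalty and the threads don't buy back what the language cost you.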
Didn't downvote, but perhaps a request for clarification: could you cite some benchmarks for these numbers? Especially the 100-500x speed loss for interpreted languages.
"Esp. The 100-500x speed loss for interpreted languages."
First, let me remind you that you have to think logarithmically here, not in absolute terms. 1-5x is the same sized range as 100-500x, about half an order of magnitude. (Pedants will correctly observe it isn't actually half, as half an order of magnitude is actually 3.16..., but it's close enough for estimate work.)
I use 50x slower than C as my guideline for how fast conventional 1990s dynamic scripting languages running on a conventional interpreter are, based on: http://benchmarksgame.alioth.debian.org/u64q/which-programs-... You can see Erlang, Perl, Smalltalk, Python 3, Lua (not LuaJIT; plain Lua, huge difference), and Ruby there. You can find Node.js, with all the mighty power of a JITed language, down on the next graph all the way on the right, hanging out somewhere around 10x slower than C, which seems to be all you can practically expect from a JITed dynamic scripting language; excepting my comments about LuaJIT in the cousin comment, I haven't seen anything that convinces me you can go any faster for that crop of languages. Of course, if somebody produces a 2x-slower-than-C Python JIT, I'll just update my understanding rather than insist it can't exist. But at the moment I see no particular reason to think that's going to happen.
You can also see Go at 2-3x slower than C just a bit to the right. (This is why I say Go is pretty fast for a scripting language, but you can see it's not all that fast compared to the conventionally compiled languages. It takes a non-trivial loss both from not doing a lot of optimization and from requiring a lot of indirect vtable lookups when you use interfaces heavily, which C++ often avoids and Rust aggressively avoids whenever possible.)
The 100-500x speed loss is just an estimate for a brand-new, unoptimized scripting language... and, actually, it's a rather generous one; it could easily go another order of magnitude or two, especially in the very early days. While that may seem extreme, note that it's just another order of magnitude or so slower than the optimized dynamic scripting languages. For an unoptimized implementation, that's not necessarily a terrible estimate. As I understand it, Perl 6 is currently hanging out in the 100-500x-slower-than-C range, though I see no fundamental reason it won't catch up to the current scripting languages at the very least once they have time to optimize. (Whether it can significantly exceed them I don't know; I don't even know that it's a goal, since the dynamic scripting languages are certainly plenty fast enough for a huge variety of tasks as-is, and that will continue to be true indefinitely.) These languages aren't "stuck" there; it just takes time to optimize.
And my final caveat is to point out that A: fast != good and slow != bad; it's merely one element of a rich and complicated story for every language. And B: while benchmarks always have a certain subjectivity to them, we are broadly speaking observing objective facts here that, in particular, engineers responsible for creating solutions for people really, really ought to know, and not dismiss because they make them feel bad. Being "insulted" at the idea that Python is meaningfully, fundamentally slower than Rust or something isn't going to change anything about the performance of your system, so it behooves you as an engineer to be sure that you've lined your requirements, resources, and solutions all up correctly.
"These languages aren't "stuck" there, it's just that it takes time to optimize"
Another thing is that we have CPU architecture that is optimized for a one-size-fits-all system when it comes to personal computing. If we truly wanted power from higher-level, more expressive languages and programming systems, we would have architecture designed for those systems.
If you want to experiment with truly new programming languages and environments, you probably have to experiment with hardware too. Our present reality makes this difficult to change, which is really too bad.
But you may have chosen to use an interpreted language for other reasons. The GP's question still makes sense relative to the other options s/he may consider (like Go itself, or Ruby).
I agree with you. Elixir can help on the web side of things, but for non-web tasks Elixir doesn't help as much. I really like Go's offering, but it gives me 4x the pain in developing a solution that Ruby does (e.g. parsing anything). I would love to simply use a compiled, optimized Ruby (like Crystal).
That's one of the reasons I migrated to Go instead of Elixir (plus, I love the idea of suddenly having systems programming within reach). But in doing so, I haven't explored Elixir and Crystal deeply enough. Would you say they each provide something, not related to taste, that Go doesn't have (or that Go does worse)?
I'd compare Go to Crystal. Go is more mature and stable, but Crystal is a little faster thanks to LLVM. Plus, I like Crystal's syntax better. I've been contributing to the Crystal ecosystem with some libs and use it for personal stuff, but I use Go at work. Once Crystal hits 1.0 and has good parallelism, I think it could be used as a drop-in replacement for Go in most use cases.
Just call it "Gouldby". That's an English surname, and can be short for "Go would be Ruby". (Which doesn't entirely make sense, but names don't entirely have to!)
Unless you explain the motivation behind yet another language, it's hard for people like me to like a new thing. How is it different from Ruby? Just concurrency support? How is it different from Go? Just Ruby syntax? Is the language statically typed or dynamically typed?
Additionally I would like to see some syntax examples at the start as well.
What? This is literally the first paragraph of README.md:
Goby is an object-oriented interpreter language deeply inspired by Ruby as well as its
core implementation by 100% pure Go. Moreover, it has standard libraries to several
features such as the Plugin system. Note that we do not intend to reproduce whole of
the honorable works of Ruby syntax/implementation/libraries.
One of our goal is to provide web developers a sort of small and handy environment that
mainly focusing on creating API servers or microservices. For this, Goby includes the
following native features:
- Tough thread/channel mechanism powered by Go's goroutine
- Builtin high-performance HTTP server
- Builtin database library (currently only support PostgreSQL adapter)
- JSON support
- Plugin system that can load existing Go packages dynamically (Only for Linux by now)
- Accessing Go objects from Goby directly
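For readers unfamiliar with the goroutine/channel mechanism that first bullet refers to, this is what it looks like on the Go side (a plain Go sketch of the underlying primitives; Goby's own surface syntax for threads and channels will differ):

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	// Fan out: each goroutine squares one number and sends it back.
	for i := 1; i <= 3; i++ {
		go func(n int) { ch <- n * n }(i)
	}
	// Fan in: collect the three results; receiving from the channel
	// blocks until a goroutine has sent a value.
	sum := 0
	for i := 0; i < 3; i++ {
		sum += <-ch
	}
	fmt.Println(sum) // 1 + 4 + 9 = 14
}
```

Since Goby is implemented in pure Go, exposing this machinery as a language feature is essentially free.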
This makes it quite a good match for containerized deployments as a single binary, or self-deploying Go apps [0].
>One of our goal is to provide web developers a sort of small and handy environment that mainly focusing on creating API servers or microservices
How does that explain the motivation behind creating another language? What were the reasons the developer/team behind Goby thought existing languages/tools didn't help? My questions are also about how different Goby is from Ruby, or Goby from Go. I also wonder whether the language is dynamically or statically typed, etc., which is the bare minimum I expect in an introduction.
If you ask me why I started this project in the first place, I'd say it was just for fun and practice.
And we know that Goby isn't special enough just by offering green threads or channels. So currently our main goal is to let users use Go's packages and manipulate Go objects directly in Goby.
http://crystal-lang.org
PS: Notably, websocket support is still lacking in Goby.