In general, all those features have been offered by compiled languages for years (even if they might've just been compiler warnings).
What makes Go so special other than the lack of deep class hierarchies?
Sure, it's more "reliable" by design than 100% of those dynamic languages out there, but that doesn't really make the language more attractive. Coming from an OO world, I find Go's "inheritance" model far inferior to what, e.g., Haskell or Scala offer.
Given the comments here I think that I can finally understand somewhat why Go is taking off the way it is, but it feels like a faulty "solution".
I think most people who are pleased with the reliability of Go are coming from something like Python where unforeseen things pop up out of nowhere on rare code paths nobody anticipated. Things like a rare error handling path where some piece of code iterates over a string the author thought was a list, etc.
Or perhaps they are coming from C++ where use-after-free or use-of-uninitialized or use-of-perversely-initialized are common at runtime. Go has none of that.
Or Java/C# where exceptions tend to bubble all the way up the stack and where you have to use try/finally, which can be harder to write safe code with than defer.
> Or Java/C# where exceptions tend to bubble all the way up the stack
There is really no difference. In Java you have to decide at what level to catch exceptions. In Go you have to decide at what level to handle an error and just not return it. In fact, if you have to pass an error one level further up the stack, it's easier in Java.
> have to use try/finally which can be harder to write safe code with than defer.
Why? I suppose that you are referring to the fact that you can put a defer right next to, e.g., opening a file. But Java has had the (arguably) nicer try-with-resources construct since Java 7.
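For reference, the Go pattern I mean is roughly this (a minimal sketch; the file name is made up):
    package main

    import (
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("config.txt") // hypothetical file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close() // cleanup declared right next to the open
        // ... use f ...
    }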
I'm not a Java hater, I love it, but a lot of code tends to be written as just "throws IOException". I've written a lot of code like that. Go forces you to handle each error separately, which can be nice.
Both the original post and comments such as mine make no comparison between Go and other languages. We're just saying that Go itself is reliable. Not that others aren't, or that Go is more reliable than others.
The Go love makes a lot of sense when you consider it's out to replace Python and Ruby, and not other statically typed compiled languages. If you learned programming via Python, and then moved to Go, it would seem revelatory. If you've previously done a lot of statically typed work, it's much more meh.
I didn't even say 'bad,' I said 'more meh.' Furthermore, I didn't say that liking go makes you hate other languages, or that you should hate go if you like other languages...
I didn't say "bad" either.................. I said "awesome", which is in stark contrast to "more meh"................
> Furthermore, I didn't say that liking go makes you hate other languages, or that you should hate go if you like other languages...
I didn't say you did. My implication was that people don't have to fit your mold of the world: I enjoy programming in many different kinds of languages. i.e., I liked Go at the outset even though I had previously done significant work in a statically typed language with a rich type system.
I'll counter your anecdotal evidence with my anecdotal evidence. I know a few systems guys who have three or more decades of experience with statically typed languages who prefer Go. :-)
> Given the comments here I think that I can finally understand somewhat why Go is taking off the way it is
The comments here are incredibly biased in Go's favor, for whatever reason. On other forums it does not get the same spotlight; for instance, it has been mostly dismissed on reddit and is rarely even discussed there at all (even on its own subreddit).
That isn't necessarily the case. Consider fmt.Println:
func Println(a ...interface{}) (n int, err error)
I've never seen any code that handles err -- I don't mean ignoring it with _, I mean not even acknowledging its existence -- and the compiler never says anything.
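For instance (a minimal sketch; the call itself is arbitrary):
    package main

    import "fmt"

    func main() {
        // Println returns (n int, err error); calling it as a bare statement
        // discards both, and the compiler does not complain.
        fmt.Println("hello")
    }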
The point is that I can't immediately tell which errors will be mysteriously ignored. This, together with the existence of panics, undermines the stance that one can tell exactly what a certain piece of Go code will do just by looking at, say, a function prototype.
Don't be fatuous. If you wrote all of the code, you should be able to tell exactly what it does. If there's some code you're calling that was written by someone else, you have to trust it to do the right thing. Exceptions versus return codes doesn't change that fact.
People usually don't check the return code of fmt.Printf because they simply don't care. Most of the time, you're just logging to a terminal, which basically never fails. And if logging did fail, you wouldn't want to abort the program because of it.
The only time you might want to check the return value of printf is if you're writing a simple utility designed to transform something passed on stdin to something on stdout.
How am I being fatuous? It is obviously false that there is any sort of guarantee that errors have to be dealt with if the compiler allows some indeterminate number of functions (unless there's a list somewhere) to be exceptions to the rule.
If your program is expected to operate in a pipeline, as most UNIX programs are, shouldn't you always check for printf returning an error? For example:
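Something like this, perhaps (just a rough sketch):
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for i := 1; ; i++ {
            if _, err := fmt.Println(i); err != nil {
                // The write failed, e.g. because the downstream end of the
                // pipe was closed; exit without a noisy error.
                os.Exit(0)
            }
        }
    }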
I'm not sure, actually, what's the proper way to do it?
For fun, I wrote three programs that just print
1
2
3
...
to stdout forever, in Python, Go, and plain C. None do any error checking.
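The Go one was essentially this (a rough sketch, not necessarily the exact code):
    package main

    import "fmt"

    func main() {
        for i := 1; ; i++ {
            fmt.Println(i) // no error checking, as described
        }
    }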
Here's the results:
$ py printcount.py | head -10
1
2
3
4
5
6
7
8
9
10
Traceback (most recent call last):
File "printcount.py", line 3, in <module>
print i
IOError: [Errno 32] Broken pipe
$ go run printcount.go | head -10
1
2
3
4
5
6
7
8
9
10
signal: broken pipe
$ ./printcount | head -10
1
2
3
4
5
6
7
8
9
10
$ pgrep printcount
$
Interestingly enough, only plain old C actually did what I wanted here. In Go and Python, we'd have to do a little work to not print the annoying error. But I don't think we want to check return codes from print in any of the languages.
Somewhat ironically, that is exactly the same philosophy that was embodied in Java when it came out, and Java has grown so that it's now mentioned in the same breath as C++ when compared to Go.
I think what happens in all languages - indeed, in all engineering projects - is that people start using them, and then their design collides with the real world, and people start asking for new features to make their own lives better, and each additional feature makes the language incrementally better without costing much. But when this has been going on for 20 years, you have an accretion of features that don't quite hang together and collectively are a royal pain to deal with. Eventually enough folks get fed up with this state that a new language can gain critical mass, and the cycle starts over again.
The only languages immune to this are ones that explicitly reject certain use-cases as out-of-scope, and can design accordingly. C, for example, remains a great little language because it doesn't pretend to shield you from the dangers of the hardware, and so it can remain both fast and simple because everyone who wants it to scale up to big complicated applications reaches for C++ or Java instead. Python remains great as long as you remember that it's a scripting language and don't pretend that it'll be fast or maintainable beyond a certain size.
> But when this has been going on for 20 years, you have an accretion of features that don't quite hang together and collectively are a royal pain to deal with. Eventually enough folks get fed up with this state that a new language can gain critical mass, and the cycle starts over again.
I agree, but Java the language hasn't become much more complicated. In fact, its development has been quite glacial, one of the reasons that some JVM-minded people switch to Scala.
Probably the larger problem is that Java (to the Python/Ruby and now Go crowd) became associated with applets and big and ugly frameworks that require heaps of XML. However, Java the language doesn't differ all that much from Go. Java is better in some respects (generics, checked exceptions) and Go is nicer in some respects (closures). However, Java has an insanely great runtime (JVM) that can be fully instrumented, supports hot code swapping (some natively, DCEVM, JRebel). It also has a zillion great libraries, good IDEs, great web frameworks (that don't require writing XML), etc.
Go will be popular in startups, etc. Some people will take it into big corps (as happened with Python and Ruby). But to be used widely in the industry, it needs to offer everything the JVM or .NET offer, and more.
The comments are almost ancillary, but if I'm reading the original correctly it is saying that Go itself -- the compiler, the garbage collection, and the "standard library", is incredibly reliable. Which has been my impression as well -- after years with a number of other languages and platforms, my first impulse when hitting an unexpected issue is looking into the platform/compiler/etc, to discover those edge conditions that I need to navigate around. Go, however...when something doesn't work as I expected, at this point I have an extremely high confidence that it is something I've done wrong. That is really unexpected in such a young ecosystem.
It seems like you missed a lot of the other things he said, such as "testing is built into the language," and "binaries are statically compiled and will work on any reasonable kernel." These may not be sexy features, but they are features that you need to actually make reliable programs.
And frankly, yes, Go's approach to handling errors is a Big Deal. It's counter to the tendency of the last few years to move more and more error handling into exceptions. In retrospect, we can see that that was a mistake, similar to the "natural language programming" dead-end that led to COBOL and HyperCard. Challenging the exception-handling orthodoxy is absolutely something interesting and new.
> What makes Go so special other than the lack of deep class hierarchies?
This is an "other than that, Mrs. Lincoln, how was the play?" type of comment. Go's type system is absolutely huge in terms of making it special. It would never have attracted the attention it did if it were just another Modula-3 rehash like Java, C++, D, etc.
> And frankly, yes, Go's approach to handling errors is a Big Deal.
It's not.
> It's counter to the tendency of the last few years to move more and more error handling into exceptions.
Haskell says hi. Go's error handling is a step back not because it tries to avoid exceptions but because it's a crummy, very slight improvement over C error handling. Contrary to what you believe, the world has moved on since then, even outside of exception-based error handling.
> Challenging the exception-handling orthodoxy is absolutely something interesting and new.
It absolutely is not new, and there is nothing interesting in the way Go did it (which is a significant downgrade from Erlang's way of handling errors).
> Go's type system is absolutely huge in terms of making it special.
You have not answered the question.
> It would never have attracted the attention it did
Of course it would; the attention it attracted is due to its backers, not to its intrinsic qualities (of which it has few, lost in a morass of decisions which would have been bad 30 years ago).
I work with Go on a daily basis (and it is my own choice), and I would like to offer a slightly different opinion than most people on this thread.
Go is reliable and fast, and its main benefits are simple concurrency, a good, well-written standard library, and a blazingly fast compiler. Moreover, the Go tour is very nice, and it gives any programmer enough material to start writing short programs using HTTP, images, channels, ... in only a few hours.
This said, very often, using Go feels backwards to me. It could be because I have personal gripes with the package management system, but I easily ignore those cosmetic things when I'm getting stuff done, so that's not it.
After getting used to type systems like the one Haskell has, Go feels very primitive. Moreover, the lack of generics and support for functional programming makes my programs longer, less readable, and less friendly for teamwork. Even though I still sell Go to my friends as a replacement for most of the scripting languages they use, (Typed) Clojure and Haskell are languages that are way friendlier to me.
I would take Go over C or C++ any day, simply because it would allow me to write relatively readable programs in less time, but if the task fits within what Clojure or Haskell does right, I would not hesitate for a second.
As of today, I primarily use Go for infrastructure work, and it really shines there. I am going to replace a lot of Python code with it, and it's not necessarily for the performance, but for the ease of deployment, the reliability, the fact that Python developers can read it, and the static typing. That is to say I don't disagree with the author, but I want to provide another point of view.
The language I believe will deliver what I hoped for from Go is Rust. I do not believe, however, that it is ever going to reach the hype level that Go has, but it seems more promising to me.
++to both sides of this, though I think I weight the positives a little more heavily than you do. The stdlib, compiler, and ecosystem stuff like the tour/godoc/playground matter more than they get credit for (the value of any language can't be determined just from its spec), and that hype helps get libraries written, tools improved, etc.
I also share the sentiment that there's one set of gripes that are annoying but don't actually prevent getting stuff done (if I'm reading your comment right). I wouldn't mind C++-style const params, some way to briefly relax the (usually quite useful) fatwa against unused names in the heat of writing/testing code, a concise built-in non-close() way to broadcast to everyone waiting on a channel, etc. But, point is, none of that really stops or substantially slows a project down. And Go has other, almost complementary, quirks I like, like expressions at the toplevel or the prohibition on discarding results of certain functions or constants that are just numbers or...
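For context, the close()-based broadcast idiom I'm alluding to looks roughly like this (a minimal sketch with made-up workers):
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        done := make(chan struct{})
        var wg sync.WaitGroup

        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                <-done // every goroutine blocked here is released when done is closed
                fmt.Println("worker", id, "released")
            }(i)
        }

        close(done) // the broadcast: wakes all waiters, but can only happen once
        wg.Wait()
    }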
I really hope that by Go 2 there's a better way to write containers or functional tools. Right now leading gophers occasionally argue that it'd be bad to extend Go until your code never has to repeat itself, and sure, you can get by with what Go has now. But that all glosses over that there's a substantial cost in expressiveness to some things Go's missing. I'd even accept something like parameterized package imports (import intset "container/set<int>" loads package set<T> substituting T for int). Those would be "Go-y" in that instantiation would have to be explicit and the type system would be unaffected (both unlike C++), though it'd sacrifice some power/brevity. Pike also offhandedly mentioned polymorphic functions in his "Less is exponentially more" post a while back, and I'd take those too. :)
Yeah--it looks like it does it about as well as a third-party tool can. But it's the sort of thing it seems like could hugely benefit from being in the core--then hopefully folks would be more inclined to write libs for B-trees or whatever on top of it.
Mostly irrelevant, but to put a finer point on the wish for something to show up in the core: I totally sympathize with the Gophers' decision to ship 1.0 without generics (or polymorphic functions or whatever); there are workarounds for not having 'em, and there's plenty else to keep the core team busy, like writing an awesome stdlib. Other languages that now have generics also lacked them at first. But it sort of grates to hear it waved off as simply unimportant that you can't deploy a set or B-tree or whatever of a new type without either cut-and-pasting (by hand or w/a third-party tool) or taking the runtime hit of dynamic typing. That's a solvable, useful-to-solve, and repeatedly solved problem, and seems eventually worth solving in the Go core, even if it's not the highest priority. Most of Go's omissions/quirks I either actively like or just shrug at, but that's one on my wishlist. End rant. :)
You mean a trick using the cpp preprocessor to implement template expansion, back when C++ (or common compilers) didn't have templates?
Did this technique just expand the classes every time they were used (in the header file, just like C++ templates), or did it generate files for each concrete type parameter which were then compiled just once (like gotgo)?
Fair enough--I'm just saying I actually wouldn't mind if the actual eventual solution in Go were a bit gotgo-like (though I'd like it to be standard, not require an extra build step, etc.). Making adaptable containers without extending the type system might even be the Go-y way to do it, though it probably wouldn't win many converts from other languages.
(Or, from the other end entirely, I could see polymorphic functions happening--they'd complicate the Go compilers more than macro-like generics would, but using them would feel familiar for Go's many refugees from duck-typed dynamic langs.)
Anyhow, I'm fairly aimlessly spec'latin' here, don't listen too much to me.
At Errplane we've written our primary datastore in Go and have been running it in production since January. It has been totally reliable, performant, and easy to work with. Easily handles the 100m requests per day we get. At this point it's my language of choice for any back end systems.
Sure! It's really just a Go wrapper around LevelDB. LevelDB is an in process database (like SQLite) written in C++. It's a key value store where the keyspace is ordered (so you can do range scans).
However, we've written enough code around it to turn it into a scalable distributed time series database with a pubsub layer built in. It does rollups on the fly or can preemptively downsample data coming in. We're actually working on a new version of it that we intend to open source at the end of the year.
That sounds pretty useful, something I would love to see open sourced as I've thought about making a distributed LevelDB based DB as well.
Any word on how it handles the distribution? Are we talking a quorum-based system like Cassandra/Dynamo, or are we talking more like a master / key-range type system like HBase?
The version we're working on now uses the Go Raft implementation to handle cluster configuration. Then each server periodically (< 30s) performs read repairs to ensure that it has the latest writes for its spots in the hash ring. Writes can be done via either write-once or quorum, with write-once being used most often. Since most of the time this data is in aggregate, we don't care about losing a few writes.
We've been using Go in production for 4 or so months at customer.io and have been tremendously happy with the performance. My cofounder will rewrite some core function in Go, deploy it to prod and get a 10x improvement. We are total converts.
Unfortunately, by insisting that all data start in a zeroed state (as opposed to forbidding the use of uninitialized data, as Rust does), Go makes the same "Billion Dollar Mistake" that language designers have been making for decades.
> And Dijkstra considered goto harmful. He wasn't right either.
He was right. The kind of goto he specifically and clearly elucidates in his paper has utterly vanished from modern programming.
He's not talking about Torvalds using a little `goto` to jump to the error-handling at the end of a function. He's talking about unstructured programming: gotos that jump across procedure boundaries, or where "procedure boundary" isn't even a meaningful concept.
Hoare only gave this talk in 2009, whereas Go development started in 2007. Did Hoare express this opinion publicly (or privately, to the Go developers) before then? I can't easily find any indication that he did.
I can imagine goto makes region analysis hellishly more difficult, and region analysis is what gives Rust the ability to statically reason about reference lifetimes, which makes it practical to remove null references from the language.
So on the Rust view, goto is even more antithetical to safe code than it was for Dijkstra!
It's been hashed out over and over again on the mailing list. Their stance boils down to language design cohesion. In particular, avoiding null in your type system while preserving orthogonality in your language design would require a completely different language than the one that Go embodies. (This is in stark contrast with, say, the Haskell community, which will add any new feature produced by original research.)
If you dig through the mailing list posts, avoiding null in the type system can be difficult to reconcile with "zero values", which each type has. More subtly, avoiding null in your type system might require adding support for sum types, which conflicts in weird ways with Go's interfaces.
I think a lot of people tend to assume that null should always be eliminated from type systems because doing so can be done for free. But I disagree that it can be done for free, and my evidence is in the aforementioned mailing list posts.
The other reason why I suspect we'll never see the elimination of null in Go's type system is that it just isn't that big of a source of bugs. Anecdotally, I very rarely see my program crash because of a nil pointer error, even while doing active development. I could hypothesize as to why this is, but I'll just leave you with this thought: the type system isn't the only thing that can reduce the occurrence of certain classes of bugs.
I never found the conflicts with interfaces issue that Russ Cox raised to be convincing. Just require that the sum type be destructured before calling methods on it. This is what Rust and Scala, both of which have something akin to Go's interfaces, do.
The bigger issue is that having a zero value for every type does indeed conflict with not having null pointers. I feel that pervasive zero values box the language design in so heavily that the convenience they add doesn't pull the weight, but Go's designers evidently disagree. In any case, Go's language semantics are so based around zero values (e.g. indexing a nil map returns zero values instead of panicking for some reason) that they can't be removed without massive changes to the entire semantics.
> We considered adding variant types to Go, but after discussion decided to
> leave them out because they overlap in confusing ways with interfaces. What
> would happen if the elements of a variant type were themselves interfaces?
It's not that there isn't a way to implement sum types in Go, it's that they aren't happy with how they interact with interface types. (That's how I interpret it, anyway.) I'm sure you and I can conceive of reasonable ways for them to coexist, but simply coexisting isn't enough for the Go devs.
> The bigger issue is that having a zero value for every type does indeed conflict with not having null pointers.
I agree that it is indeed the bigger issue. Personally, I make use of the "zero value" feature a lot and very infrequently run into null pointer errors while programming Go. So that particular trade off is clear for me. (I certainly miss sum types, but there is a bit of overlap between what one could accomplish with sum types and what one can accomplish with structural subtyping. Particularly if you're willing to cheat a little. This alleviates the absence of sum types to some extent. Enough for me to enjoy using Go, anyway.)
That FAQ entry and associated mailing list posts was what I was referring to as unconvincing. I don't see how sum types and interfaces are in conflict: just require destructuring before calling interface methods, as Scala does.
He considered using goto instead of structured programming harmful. Which was right. People love to use that article like a strawman, ignoring the actual content and context.
At a GoSF meetup, an audience member asked if Go was reliable...
- OP was surprised, because he had been using it in production for nearly 3 years.
- Previously used it at a financial company (a wholesaler for institutional share traders and stock brokers).
- OP used it there for system monitoring, as a data store, and as messaging middleware.
- Go's adoption is gathering pace thanks to the terse syntax, straightforward and powerful standard library, excellent tooling, and concurrency primitives. [Sic. And performance]
- Go's core development team is a reason why it is showing maturity beyond its years.
- OP changed positions so he could work on Go full-time.
Question: is this an automated summary or are you doing these summaries yourself manually?
On topic, it seems odd that a Python process that would take 3 minutes to scan nodes could be reduced to a Go script that takes under 1 second. Perhaps this is entirely due to going from serial to parallel, but the timeout thing that he mentioned sounds like an error in the Python code rather than a language problem. And I say this as someone who uses/enjoys Go.
Indeed... I would have liked to hear what made the Python script take so much longer. The longest part, I would assume, would be the network requests themselves, which are inherent regardless of language.
Sounds like it was the execution time required to process the vast amounts of data produced. Go is quite likely to beat Python in compute-bound or data-crunching tasks.
Go is definitely faster than Python for those types of tasks, however I'm suspicious of it being that much faster. I've crunched massive data sets in Python and as long as you're aware of its limitations, you can make things pretty zippy.
What's notable is that no one has yet posted here saying, "Go stunk for me. Perf and reliability were both awful. I went back to blahblah and threw away all my Go code." It's usually easy to find detractors of any technology.
I actually have seen complaints about Go's performance relative to other compiled languages, like C++, D and Java. I don't think it's coincidental that most of the Go converts come from a dynamic-language background. If you're porting from Ruby, 2-3x slower than Java still seems blazing fast.
For me, Go misses the sweet spot entirely. A year ago I was curious about both Go and Clojure, my interest piqued by the strong concurrency support each has. I picked Clojure and have grown more and more confident in that choice.
Why not Go? In a language with nil pointers (Tony Hoare's "billion-dollar mistake"), no generics, and sub-JVM performance, static typing just doesn't seem worth it. Also, poor interop with other languages means Go can only draw from its own small community for libraries (contrast Clojure which makes using mature, well-tested Java libs trivial).
When Rust is more mature, it may be what I hoped Go would be: the safe language, with lightweight threads, that makes me never have to write C or C++ again.
Go is interesting and I really enjoyed channels, but the lack of generic programming really disappointed me. I guess supporting the generic paradigm is pretty tricky (C++/Java templates/generics aren't dear to my heart), but not being able to implement map/filter/reduce functions...
I just did a prototype in Go. Python is my normal go-to language for prototyping. I concluded that for prototyping, I'm going back to Python.
My main objections were neither performance nor reliability (I don't care about either while prototyping), but developer productivity. I find there are just so many programming conveniences in Python that I wish I had in Go - list comprehensions, negative slice indexes, for- and with-statements over user-defined data types, heterogeneous dictionaries, easy duck-typing. When I just want to get some ideas working to prove a concept, it's really annoying to flesh out all your types and error handling in full detail.
I do think Go would probably make a pretty nice production language, but I'm not at liberty to convert the large C++ and Java systems I work with to Go. Even if I were, I think I would prefer C or C++ because of its easy Python bindings.
I don't need concurrency, and my app (which is a fairly simple "munge some data from backends, apply some algorithms, and template it" webapp) seems easier to reason about without introducing it.
Except that you do. You've just said it yourself! What the for statement does in Python is essentially running two communicating coroutines, passing control between them. Yield-using generators actually make it very explicit, by creating special resumable stack frames. The only difference is that in Go, you're not limited to a single stack frame in the producer, nor do you have to use a loop or the new PEP-380 "yield from" syntax (for example, if you're iterating over a tree). The blocking channel takes care of the execution sequencing. (Also, in Go, the channel makes it possible for the consumer to use multiple stack frames, if you're re-building another data structure instead with the data you receive from the producer, but that seems to be a rarer case. It's a nice symmetry, though.)
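Concretely, the Go shape of it is roughly this (a minimal sketch with a trivial stand-in producer):
    package main

    import "fmt"

    // gen plays the role of a Python generator: a producer goroutine that can
    // use as many stack frames as it likes and simply sends values on a channel.
    func gen(n int) <-chan int {
        ch := make(chan int)
        go func() {
            defer close(ch)
            for i := 0; i < n; i++ {
                ch <- i
            }
        }()
        return ch
    }

    func main() {
        // The consumer side, analogous to a Python for-loop over a generator.
        for v := range gen(5) {
            fmt.Println(v)
        }
    }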
The Python equivalent of two communicating goroutines is a bidirectional generator (PEP 342), not a for-loop. I try to use the language construct with the least power.
Basically what I wanted this for is that I have a user-defined type that is basically a list of some other user-defined type, plus some extra stuff. In Python it's trivially easy to implement __iter__ and delegate, and then I could manipulate these objects the way I thought of them, as containers for other objects. In Go...I wonder if I could've used type embedding (making the first member of A be []B), but I don't think the built-in language constructs respect embedding when figuring out what's a legal expression. I ended up biting the bullet and explicitly looping over the slice member, but the point of prototyping is being able to think of your code in the terms of your problem domain.
I think that's because Go is reasonably good for, say, 90% of situations, and if you're in the 10% where it is not a good fit then you already know that and you know better than to try using Go in that situation, and thus few people find Go stinking.
On the other side is things like Python, which are reasonably good for a much lower fraction of things (e.g. 30%), but if you're in the 70% you very well might not realise it does not suit your situation and thus experience Python stinking.
The thing is, it is not a bad language. It's also written by great engineers, who deserve their place in history. My problem with it (and this is the critique you see very often outside HN circles, where Go is nowhere near as hyped) is that it doesn't offer much over the state of the art of twenty years ago (Java), which wasn't exactly state of the art either ;).
I wholeheartedly agree with this article. I've had great success with Go's reliability.
Another important point to make is that when programs written in Go do fail, it tends to be trivial to figure out why. There's no type hierarchy like most OOP languages and no method_missing magic like Ruby. I can typically just trace the logic backwards from the error to the cause. Most bug reports I receive for my open source Go projects get fixed in minutes.
I suspect it's Ruby (dynamic typing, syntactic flexibility, etc.) rather than OOP itself that's the root of the issue when it comes to tracing logic backwards.
I'd seen a couple of references to Go being used by financial companies in London (both for prototyping and in live systems) and this post seems to provide yet more confirmation. Is finance in particular looking to be a forthcoming hot market for Go due to its balance of performance and developer friendliness?
I have both production systems (ircrelay.com) and well-used client software (packer.io) written in Go, so I'd like to comment on this from both perspectives, server-side and desktop-side.
First off, both have been extremely reliable from the get-go. IRCRelay has a custom IRC router that is able to route IRC connections to their proper bouncer server, then just stream the contents to and from. This server was written once, and has been running in production for over a year without a single downtime or crash incident.
Likewise, Packer reliability has been astounding, even to me. It really rarely crashes, and when it does it is because I'm usually skirting around the type system of Go (casting).
The PRIMARY reason Go is so reliable is also a reason many people hate go: you _have_ to handle every error. There are no exceptions, you know if a line of code can fail or not, and you have to handle it. This leads to a lot of "if err != nil" checks, and it can be really, really annoying sometimes. But the truth is: once you get that thing into production, it is never going to crash. Never. MAYBE nil slice access, but that's about it. Note the compiler doesn't actively enforce this, but it is Go best practice to handle every error.
(Pedantics: yes, you can assign errors to "_" variables to ignore them or just not accept any return values, but this is very actively discouraged in Go and it is one of those you're-doing-it-wrong kind of things)
Also: a compiler. Coming from Ruby, having a compiler is just amazing. Obviously compilers have been around forever and Go's isn't special in any way. But after living in a dynamic language heavy environment for many years, I can't imagine going back to not having a compiler. It just makes writing new features, refactoring old ones, etc. all just so easy. It catches all the bone-head mistakes that cause a majority of crashes in code I previously wrote.
Another reason: Testing is heavily encouraged and baked right into the official build tool. This makes it so easy to write tests that you always do. It isn't really an opt-in thing, because it is one of those things you just DO if you write Go. This makes it so that even with explicit error checking and the type system, you're fairly certain your logic is reasonably correct as well.
And finally, you're statically compiling your applications. Once they're running, no external dependencies can mess that up. Dependency management from _source_ form is a problem with Go, one that is actively acknowledged and being worked on. But once that program is compiled, it is safe forever as long as you run it on a proper kernel.
So my experience is anecdotal, but there are real language features and community ideologies at play here that make Go a really stable language to write programs in.
> The PRIMARY reason Go is so reliable is also a reason many people hate go: you _have_ to handle every error. There are no exceptions, you know if a line of code can fail or not, and you have to handle it.
> Pedantics: yes, you can assign errors to "_" variables to ignore them, but this is very actively discouraged in Go and it is one of those you're-doing-it-wrong kind of things
The _ thing is fine because it's visible, but what's potentially insidious is the compiler having no complaints about:
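For instance (a minimal sketch; os.Remove returns only an error):
    package main

    import "os"

    func main() {
        // os.Remove returns just an error; calling it as a bare statement
        // compiles cleanly and the error is silently discarded.
        os.Remove("/tmp/does-not-exist")
    }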
I do like Go a lot, but this is a valid asterisk on talk of the safety of its approach to errors.
---
For non-gophers: The reason this doesn't come up all the time is that functions often return multiple values, at least one of which you actively need, at which point you are forced to do something with any error that comes along for the ride.
True, but I consider this the same idea. If you look up the API of a method and see an "error" return type, it is your responsibility to handle it, and you're very encouraged to do so.
It won't compile, as `os.Create` returns `(*os.File, error)`. So you must either:
    // Proper error handling
    f, err := os.Create(fname)
    if err != nil {
        // Well, you may want to do something other than crash here.
        log.Fatalf("Could not create '%s'", fname)
    }
    defer f.Close()
If you ignore the error, it's obvious:
    f, _ := os.Create(fname)
    defer f.Close()
It's like doing this in Java:
    FileOutputStream f = null;
    try {
        f = new FileOutputStream(fname);
    } catch (IOException e) {
        // Ignore the exception
    }
Possibly worth pointing out to non-gophers: in `Go` it is illegal to declare an unused variable.
So if you write `f, err := os.Create("/bad/path")`, and that is the first time `err` has been declared in that scope, you can't just let `err` go unused or your program won't compile.
> The PRIMARY reason Go is so reliable is also a reason many people hate go: you _have_ to handle every error.
If the compiler isn't enforcing it, you don't have to do anything. At best you can say the language suggests it, but if there isn't even a warning I don't see how that's different from pretty much every typed language.
This is actually enforced [in some cases] as a compile-time error. (BTW the `gc` family of Go compilers don't have warnings.)
The two errors in particular that help trap this are:
1. All declared variables must be used.
2. When a function's return values are assigned, all of them must be assigned.
(Go allows multiple return values from a function, and you cannot assign only some of them.)
---
Idiomatically one would store [one or more] return values from a function with a short-form declaration (`:=`), which declares the variables on the LHS and assigns the RHS to them.
`a,err := someFunc()` where `someFunc()` returns `(SomeType, error)` declares and assigns `a SomeType` and `err error`.
If you declare `a,err := someFunc()` and do not use `a` or `err` elsewhere in scope, your program simply won't compile.
The only way to squelch that error is to either use the variables or put a `_` on the LHS (which makes an "assignment" to the blank identifier, i.e. discards the value).
---
An example of where this _wouldn't_ be enforced by the compiler is if you've declared [and used] `err` elsewhere in the scope and you are _reusing_ the identifier.
For example: if you declared, assigned, used, and reassigned `err`, the compiler won't force you to use it _after_ the second assignment, so that second assignment could go untrapped with no complaints from the compiler.
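For instance (a minimal sketch with hypothetical calls):
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("a.txt") // err declared here and checked below
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        _, err = f.Seek(10, 0) // err reassigned; nothing forces a second check
        fmt.Println("carrying on regardless")
    }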
If you call a function that returns nothing but an error and you ignore the return value, the Go compiler does not complain and you will ignore the error. Go doesn't always force you to handle errors.
Also, Go has exceptions: panic and recover. They are idiomatically not used in the same way, but it's not true that you know by looking at Go code whether each line will fail, because lots of language constructs can implicitly panic. (You gave an example of one: indexing a nil slice.)
One major difference between panic and exceptions is that you can only catch it on a function exit boundary with defer and recover. There is no "try".
So really, you should think of panics more like Erlang's error handling, where a segment of the program (bounded by "recover") will just stop running, but can initiate the process of self repair.
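To be concrete, the defer/recover boundary looks roughly like this (a minimal sketch; function names are made up):
    package main

    import "fmt"

    func risky() {
        // recover only has an effect inside a deferred function, i.e. at the
        // exit boundary of risky; there is no inline try/catch block.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        panic("something went wrong")
    }

    func main() {
        risky()
        fmt.Println("still running")
    }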
The difference is that Erlang has process isolation: a panic is guaranteed not to mess up the state of any other process, because there is no shared state. This is what makes reliable self-repair possible. In Go, there is one giant shared mutable heap shared between all goroutines. So there is no guarantee that a panicked goroutine left the heap in an orderly state such that self-repair is possible.
Erlang actors and goroutines are very very different.
True. Although, panics do run defers, so with good coding practice it's possible to ensure that either you don't share things (but rather pass them around over channels) or that things like locks are safely cleaned up.
But seriously, it's not just possible to be memory safe in Go, it's fairly easy. You basically have to remember to never lock without defer mutex.Unlock() unless it can't possibly panic, and to share data via channels such that it only ever has a unique "owner".
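Roughly, that discipline looks like this (a minimal sketch with a made-up shared counter):
    package main

    import (
        "fmt"
        "sync"
    )

    type counter struct {
        mu sync.Mutex
        n  int
    }

    func (c *counter) inc() {
        c.mu.Lock()
        defer c.mu.Unlock() // runs even if the code below panics
        c.n++
    }

    func main() {
        c := &counter{}
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.inc()
            }()
        }
        wg.Wait()
        fmt.Println(c.n) // 10
    }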
Personally I'd have much preferred full isolation, immutable data with threadsafe mutable references, and Clojure style "persistent" collections which can be treated as values. But Go didn't go down that route.
Sorry, note my "pedantics:" note on the bottom. If you ignore errors then you're actively going against the idiomatic way to write Go. I never claimed it was compiler-enforced. I'm not arguing for or against either side either (compiler or non-compiler), I'm just trying to state facts.
It is true that panics exist, but panics generally represent non-recoverable errors that deserve to crash. They're very rarely bugs (except slice access, which I mentioned). In practice, in any Go systems I've written, panics have not been an issue and have really never occurred (except, again, slice access, which I mentioned).
Regarding your comment below (I can't comment because it is too deep): where did you quote me saying the compiler forces you to handle errors? It doesn't, and if I said that it is an error.
I don't know how people keep misreading my comments, so in all caps: THE GO COMPILER DOES NOT ENFORCE ERROR HANDLING AND I NEVER CLAIMED IT DID (in any of my comments).
As frou_dh points out below, Go does not force all errors to be handled (edit: to be clear, they don't even have to handle it with _ ). See this working example in the playground for reference: http://play.golang.org/p/IPDTG6Ub2b
Edit: The comment that you replied to said "Go does not force you to handle errors." Your comment said "True but it's not idiomatic." My reply was meant to show that it's very easy to ignore errors in code that is not obviously un-idiomatic. Also, note that in your grandparent comment, you stated, "The PRIMARY reason Go is so reliable is also a reason many people hate go: you _have_ to handle every error." In my link, however, the error goes unhandled.
I disagree. It's not idiomatic in, say, Python, to catch every possible exception, only the specific ones that the programmer thinks are likely to occur in practice.
I'd love to see a "go vet" check or the like that complains if you implicitly discard an error return (but can be silenced by explicitly assigning it to _).
That still gives you a way to do odd things, but you'd have to explicitly choose to be odd. And since it's a check rather than a new language rule, old code still compiles.
As I don't program in Go this might be wrong, but doesn't the programmer have to tell the Go compiler to ignore the error by using the _ variable thing?
In that sense Go is still forcing the programmer to handle the error and it is the programmer who is deciding to ignore it.
I can see that making the programmer do nothing more than mark the error as ignored still forces them to think about the error, and as such adds some value.
Um, not really. There's a working group that is gathering data, exploring options, and intends to present a plan early next year.
Right now what we have is very minimal. The current tools let you manage versions your own way. If we built something more complex we would be stuck with it forever. We're being conservative because getting it wrong would be worse than not providing it at all.
I have no idea where you get that impression from. Most people who hear about that work say something like "Oh great! I'm glad people are thinking about this, and look forward to seeing what comes of it."
There are a couple of vocal people there with strong opinions. They don't represent the community as a whole. Only people with strong contrary opinions would have anything to say in such a thread, so there's a selection bias at work here.
> Dependency management from _source_ form is a problem with Go, one that is actively acknowledged and being worked on
A bit off-topic from the original article, but I wrote a little tool for this very purpose. It's still not fully fleshed out, but we're using it at my place of work and it's worked out pretty well so far. I haven't gotten much coverage of it though, so if anyone wants to take a look at it and send some comments/suggestions, that'd be pretty awesome:
The nice thing about goat is that if you've been following go's best practices for development (namely, all of your import paths use the absolute import path, for example "github.com/whatever/myproject/submodule") then you won't have to change ANY of your existing code.
Doesn't this just come down to "because we are better programmers?"
Or maybe: worse programmers :P
A truly excellent programmer wouldn't need language support to make sure errors are handled, for he or she knows no single piece of code is ever going to be reliable if error checking is omitted, no matter what language it is written in, and deals with it automatically.
Then again, everybody makes mistakes and not many are truly excellent, so a language guiding you in the right direction is certainly a Good Thing. Easily proven if you look at the number of SO questions where the answer basically is "well, if you hadn't omitted the error checking you wouldn't have to ask this question in the first place".
I guess I'm just a little sceptical that that is the reason. Seems a little too handwavy. Especially when so many other languages/frameworks had decent tooling, too. Consider that Rails would set up tests for you as well. I fully grant that testing most websites is a good deal more involved than testing most utilities. But, if anything, I would suspect that is the true gem in Go. Seems it is mostly used for utilities with well-defined inputs and outputs.
Your 'truly excellent programmer' is like the ancient man of myth; they simply don't exist.
An excellent programmer understands that they are human, and uses tools to amplify their abilities. Nothing about extra tooling makes you less of an excellent programmer, or excellent programmers would only use machine language.
In every go thread I see all these favourable comments, but they only seem to ever compare to dynamic languages. Where's the comparison of go with OCaml/Haskell/Scala/F#?
Out of curiosity, what sort of setup do you use to develop Go? Do you catch mistakes like typos, etc at compile time, or do you use an IDE that would catch those beforehand?
I use GoSublime; it comes with GoCode, so you get to see the method signature when the autocompletion pops up.
Also, in the settings of GoSublime, you can set it to run `golint`, `go vet`, `errcheck` and `go build` at every save, and capture the output to put it in the gutter. This way, you catch pretty much everything.
> A market maker is basically a wholesaler for institutional share traders and stock brokers. During my time there I replaced several key systems components with Go.
I'd like to hear more about how you deal with Go's garbage collection in a real time system like this.
He may not have had to deal with it at all. He replaced certain components of it with Go code, but none of them may have required real time guarantees.
So... Go rocks because
a) you've got to handle all the errors
b) there is no method_missing (or its ilk)