You can get the full stack of the crash point (plus a few extra frames that handle the stack generation) in a deferred recover, e.g. https://play.golang.org/p/XvZteY6Y6fh
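A minimal sketch of the pattern (the `divide` function here is just an illustrative example, not from the playground link): a deferred recover turns the panic into an error and attaches the stack captured at recovery time.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// divide panics on a zero divisor; the deferred recover captures the
// stack at the crash point (plus a few runtime frames that handle the
// panic machinery itself, which show up at the top of the trace).
func divide(a, b int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v\nstack:\n%s", r, debug.Stack())
		}
	}()
	return a / b, nil
}

func main() {
	_, err := divide(1, 0)
	fmt.Println(err)
}
```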
github.com/pkg/errors is a simple way to dump expensive stacktraces into each new/wrapped error. Or you can just panic and recover in your own app; it's possible, though discouraged.
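For a sense of what pkg/errors does under the hood, here is a hypothetical stdlib-only sketch (the `stackError` type and `newStackError` name are my own, not the library's API): the stack is captured with `runtime.Callers` at error-creation time, which is the expensive part the comment refers to.

```go
package main

import (
	"fmt"
	"runtime"
)

// stackError is a hypothetical error type that records the call stack
// at creation time, roughly the idea behind github.com/pkg/errors.
type stackError struct {
	msg string
	pcs []uintptr
}

func (e *stackError) Error() string { return e.msg }

// StackTrace renders the recorded program counters as function/file:line frames.
func (e *stackError) StackTrace() string {
	var out string
	frames := runtime.CallersFrames(e.pcs)
	for {
		f, more := frames.Next()
		out += fmt.Sprintf("%s\n\t%s:%d\n", f.Function, f.File, f.Line)
		if !more {
			break
		}
	}
	return out
}

// newStackError captures up to 32 frames, skipping runtime.Callers
// and newStackError itself.
func newStackError(msg string) *stackError {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(2, pcs)
	return &stackError{msg: msg, pcs: pcs[:n]}
}

func main() {
	err := newStackError("something failed")
	fmt.Println(err.Error() + "\n" + err.StackTrace())
}
```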
If we're talking outliers, then Titan, the largest moon of Saturn, has a surface atmospheric pressure about 1.4x Earth's, while having only about 2.3% of Earth's mass.
A legit criticism, but who would transmit an 8 MB data block using LZ4 over a network? There are far better compression schemes whose lower throughput becomes a negligible tradeoff once network latency is involved.
I am currently using LZ4 for a project compressing Netflow-derived data for a major ISP, and even then it's in 64 KB chunks for quick access and decompression.
The question is whether every single router between the attacker and the destination accepts that block and passes it on. TCP window scaling will get an ISO across a building faster, or data from EC2 to S3 faster, but it won't get your 8MB block through the internet backbone.
And keep in mind that if you do manage to break something, it'll likely be the very first router in the chain, not the destination system. Exploiting said router in such a way that it will exactly rebroadcast the problematic packet (i.e. perform its intended function), from within your own shellcode--and then continuing that chain all the way to the destination--would be quite a feat.
That's not particularly relevant; you're thinking at the wrong layer. Seen from a program sitting on top of TCP, there's no such thing as a TCP message/block. You can very well grab a single 16MB chunk with 1 read() call, if the socket buffer is sized for that.
Or you can append incoming data to a big internal buffer which you pass through decompression once that buffer fills up.
You would have to look at protocols sitting above TCP that transfer compressed data (these typically define a message-based framing) and at how an implementation decompresses that data.
Depending on where the attacker and the potential victim are located, on the protocol used, and on how the network is structured, all of that may or may not be a problem for the attacker. Maybe he already owned the last router before the victim via a totally unrelated issue and it's just a single hop.
I concur that this bug is unlikely to be exploited in the wild, but I'm generally opposed to the statement "yes, that may be a critical problem, but who would do that?!" People who want to attack you do that.
Today I read an article where the response to a critical flaw in a process was "but that's not a real world scenario, those actions would need to be done deliberately and that would require criminal energy." Phew, we're safe then. Only positive energy around, move along, nothing to see here.
I wasn't so much saying "phew, you can relax" as I was saying "this is why you haven't seen a new Internet worm based on this concept."
It can still, of course, be done in one-off scenarios (attacking a peer on a guest wi-fi network is one easy possibility) but it's not Heartbleed-level scary, because you can't just "scan the Internet" for the vulnerability, and attack every vulnerable thing you find to useful result.
> I wonder how much it'd shock Eugene if someone pointed it out to him that, in American racial discourse, Irish and Italian folk were once considered "not white."
The title of his article is "How the Asians became white" which as Eugene pointed out in the comments, is an allusion to "How the Irish Became White," by Noel Ignatiev.
So it wouldn't shock him at all, that's his point.
> That means that no number may be more than 100 away from any other number once fully sorted (7 bits)
It's possible that 90% of numbers are <1 million, and the remaining 10% are > 91 million. That's a 90 million delta possibility.
You would of course use a delta scheme that can encode any number using O(log n) bits. At that point having a lopsided distribution of numbers actually helps you. Cramming the first 90% of the numbers into 1% of the space can save you about six bits per number. Even if you spread out the remaining 10% the price is going to be far less than sixty bits each.
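A sketch of such a scheme using variable-length integers (varints), where each delta costs roughly O(log delta) bits: the dense 90% of values encode in a byte or two each, while even a 90-million gap costs only about four bytes. Function names here are my own, not from any cited implementation.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"sort"
)

// encodeDeltas sorts the input and stores each gap to the previous
// value as a varint, so small deltas take one byte and even huge
// gaps take at most ten.
func encodeDeltas(nums []uint64) []byte {
	sorted := append([]uint64(nil), nums...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	buf := make([]byte, 0, len(sorted)*2)
	tmp := make([]byte, binary.MaxVarintLen64)
	var prev uint64
	for _, v := range sorted {
		n := binary.PutUvarint(tmp, v-prev)
		buf = append(buf, tmp[:n]...)
		prev = v
	}
	return buf
}

// decodeDeltas reverses the encoding, yielding the values in sorted order.
func decodeDeltas(buf []byte, count int) []uint64 {
	out := make([]uint64, 0, count)
	var prev uint64
	for len(out) < count {
		d, n := binary.Uvarint(buf)
		buf = buf[n:]
		prev += d
		out = append(out, prev)
	}
	return out
}

func main() {
	nums := []uint64{5, 1000000, 3, 91000000, 42}
	enc := encodeDeltas(nums)
	fmt.Println(len(enc), decodeDeltas(enc, len(nums)))
}
```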
Most people will visit the page, scroll down a screen or two and leave it since they're not game programmers.
So, just speculating: he's no stranger to bandwidth spikes, so perhaps the approach significantly saves bandwidth during those periods. Whether it's a conscious attempt to save or just a webdev quirk, I don't know.
You got it. I implemented lazy-loading for the precise case you described: saving bandwidth on people who don't want to read the full article.
I tested it on Chrome, Firefox and Safari: I did not find it annoying... but maybe that's because I tried to put myself in the shoes of someone reading slowly, and I have DSL bandwidth...
My experience with their B+ tree database is that once your database hits a GB or more, closing it can take 10 minutes or more (depending on queued writes), and opening a database that was not closed correctly rebuilds the entire database.