Hacker News | b0's comments

I've never heard the one-eyed trouser snake one before - I nearly spat my (non-Olympic-overlord-approved) water into my ThinkPad :-)

They are by far the worst mascots ever.


I'll give you some ideas why it's an absolute fucking disgrace:

* Rapier missile launchers installed on residential flats.

* A dispersal zone around the Olympics which has a curfew, a 42,000-strong mostly private police force, and all legal rights suspended for residents and visitors.

* Armed drones patrolling the air around the site.

* Armed troops on site and the surrounding area.

* They built on Leyton Marshes which is common land.

* Warships have been stationed on the Thames.

* Transport disruption due to Olympic traffic lanes being added and service alterations across the entire transport network (resulting in a 3-mile walk to take my disabled daughter to her physiotherapist - fun, eh?)

* The fact that we have bankrupt hospital trusts which could have been bailed out several times over for that amount of money.

* The fact that NHS maternity and surgical services are turning people away and warning of serious disruption during the event.

* The fact that the Olympic village was promised for social housing. Out of 11,000 residences, 675 are going to it.

* There are branding police, and official products must be purchased on site. You can't even take a flask of water with you in case they can't make money out of you.

* The fact that the political elite asked for the event even though polls taken at the time suggested the general public didn't want it.

* The fact that "we're going to benefit from it", but there are no tangible benefits which are provable and every Olympic city has gone down the pan after the Olympics.

Ultimately, everything we expect and pay huge amounts for in tax is suspended for our political and corporate overlords. It's a militarisation exercise.

Pretty picture: http://www.terratag.com/Files/52189/Img/06/APOCOLYMPICS_DETA...

(product link for it: http://www.terratag.com/PBSCProduct.asp?ItmID=8759948)


My favourite is the promise that people in the surrounding areas will benefit from all of the funding, new developments, etc. BS. The "area" will benefit (gentrification leading to more expensive rent/housing), effectively pricing out the existing residents.


As the parent commenter and a previous resident (of Leyton), you've nailed it there. Our landlord upped our rent from 1000GBP a month to 1400GBP a month the moment it was announced.


Cisco got the contract for most of the network infrastructure on the basis that the kit would be donated to education afterwards.

Apparently it will be donated to $$$$ Cisco certification courses run at Cisco training centres. And of course training == education.

So they get paid £££ to install the kit, then they get a tax write-off for donating it to themselves, and then they get to charge people £££ to use it to become Cisco-certified engineers!


Good stuff.

Can you please make the text darker - I suffer from some sight problems which make grey on grey quite hard for me to read.


New VMware is probably more expensive than old VMware, and it's expensive enough already. Every portrayed advantage of virtualisation stops being cost-effective when you have to pay their extortionate prices. Xen/KVM and Hyper-V have a serious advantage.

For ref, I currently have to babysit a large vsphere installation (44 hosts) and it's a money pit.


It is worth the price if you need the features. If you're just running a few VMs on a server, there's no reason to pay 6k for vSphere.


We run around 400 VMs across 44 (big scary) servers, and I find that VMware mostly exists as a cop-out for bad planning and bad architecture, albeit a more cost-effective one than fixing either. It also shoots you periodically - like the 2TB LUN size limit on our vSphere, which meant we had to introduce mega-frigs such as NTFS links and sharding to get a filesystem past 2TB. 2TB isn't much.

I think most enterprises are using it as a big sticky plaster.


It has a few really nice use cases (legacy one-off WinXP servers running software from defunct vendors; an RHEL3 host for old imaging software acquired by Adobe and shelved? perfect in both cases), but the other stuff I've seen it put to has been horrible. JBoss on Linux on hundreds of hosts converted into a big blob of JVMs running in VMs... there's no point in giving up that much hardware when the number and config of the servers remain so consistent, and thanks to balloon memory VMware keeps stealing RAM from the VMs. It's a lot of money to solve the "I need a virtual KVM" problem(?)


That wouldn't be that horrid Adobe forms thing that sits on JBoss would it? (we have a VM for that!)


You are right! I think democratizing virtualization and lowering prices has to be at the top of their priority list.


Why would it be? Their customers are almost exclusively medium to large enterprises. They have no incentive to lower prices, nor to get smaller entities into the game.


I am not sure about that. The medium-business virtualisation market seems wide open for disruption. Until now, most players have focused on large enterprises.


I thought TCP over TCP was a sin?


It is, but Nicira does IP over IP.


Technically Nicira does both. One of the tunnelling protocols it uses is STT (Stateless Transport Tunnelling) Protocol which they effectively made themselves (and is up for draft in the IETF). It's their mechanism of choice for communicating between the network controllers and hypervisors over the physical network.

It's not true TCP, but it looks enough like it to allow hardware offloading of all the tunnelling to the network interfaces, saving a lot of CPU power.

This allows for throughput speeds in an STT software tunnel to reach the same maximums as "raw" TCP through a given interface.
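The trick above can be sketched roughly: STT frames lead with a 20-byte header laid out like a TCP header (per RFC 793), so the NIC's TCP segmentation offload applies, even though there is no real TCP state machine behind it. This is an illustrative sketch, not the actual STT encoding - the port numbers and field reuse shown here are assumptions based on the STT draft:

```python
import struct

def tcp_like_header(src_port, dst_port, seq, ack, window=65535):
    """Pack a 20-byte TCP-style header.

    STT reuses this wire format so NICs apply TCP segmentation
    offload (TSO), but fields like sequence/acknowledgement are
    repurposed for STT framing rather than carrying TCP state.
    """
    offset_flags = 5 << 12  # data offset = 5 words, no TCP flags set
    return struct.pack('!HHIIHHHH',
                       src_port, dst_port,
                       seq, ack,
                       offset_flags,
                       window,
                       0,   # checksum (computed/offloaded by the NIC)
                       0)   # urgent pointer (unused)

hdr = tcp_like_header(49152, 7471, seq=0x10000, ack=0)
```

Because the header parses as valid TCP, existing NIC offload paths segment and checksum the tunnel traffic in hardware, which is where the "raw TCP" throughput claim comes from.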


It depends where you live. My local Tesco is like Fort Knox, as it is right next to a council estate.


Of course there are. They just aren't as noisy which is why you don't think they exist.

Away from the buzz of the tech cities, the trendy "apps" culture and tech giants, there are lots. In fact, these people are actually the silent (and better paid) majority in the industry.

However, there is not much fanfare and most of them produce specialist software for niche industries.

I myself produce specialist financial modelling software which is all Windows desktop (C# + WinForms). I spent a few years doing logistics software (C# + WinCE + WinForms) and before that I spent 10 years writing a large desktop based system for distributed logistics and asset management for the defence industry.


This is all a terrible idea and it will fragment the Internet as we know it. The internet is not a budget strip-mall of different low grade outlets; it's more an orchard full of fruit you can discover and pick at will from millions of trees.

What people have invented is HTML Applications, much as Microsoft promoted in the early 2000s with some marketing and store ceremony around them.

Also, let's look at NaCl while we're here: it's basically a modern version of ActiveX.

Then we had Silverlight, which was glorified Flash for LOB applications and could run out-of-browser. I wonder how long it'll be before Google reinvents that too.

All those are dead, and for a good reason.

Microsoft even sees that these approaches are just bad and has pushed away from them heavily recently apart from in the desktop and mobile space where they are 100% REQUIRED.

As far as their integration goes now, you can pin sites to the taskbar and there is no massive ceremony or framework around it - it's just a glorified bookmark.

Just because Google packages it up and throws it into the fad browser of the day, don't assume it's not the same golden turd that we've all hated in the past.

George Santayana: "Those who cannot remember the past are condemned to repeat it."


You're very confused about these technologies. NaCl and ActiveX aren't even remotely comparable. ActiveX is a method for deploying arbitrary binaries via the Web, and history showed it to be extremely dangerous. NaCl is a method of running thoroughly sandboxed native code; because NaCl is machine specific, it's deployed only through Chrome extensions/apps, and there's no intention to expose it directly to the Web. PNaCl adds a portable bytecode layer on top of NaCl, and is intended as a general Web technology. Realistically, it's more comparable to Java or .NET, only lower-level and more performant, with better security - and no one's going to get sued for using it.


Not quite, and I'm not confused.

Both have sandboxes (ActiveX since Windows NT 6.0 and IE7), both have restricted APIs, both run native code.

PNaCl is the equivalent of Silverlight, which is a cut-down CLR.

More performant - I doubt it. The CLR and JVM are pretty much up there. The moment you add any virtualization, trap code or translation layer to native code via NaCl - which you will require for security - there is going to be overhead that will knock it in line with a VM architecture. Startup time might be lower - that is it.

Better security - that's a lie. Virtualization on any layer never gave anyone better security. It's throwing stones in glass houses. The only hard security boundary is at the MMU/page table. As NaCl grows, you will see it break.

No-one getting sued? I'm sure the EU will have something to say when no other vendor implements it and Google uses it to leverage market share, much like Microsoft did in the late 90's and early 00's.


I'm sorry, but you are so grossly wrong in your statements that the only valid response is a bit of a fisking. Normally I'd prefer not to respond this way, but I've already engaged you, and I can't really let this much misinformation stand.

> Both have sandboxes (ActiveX since Windows 6.0, IE7), both have restricted APIs, both run native code.

ActiveX is a general purpose object API and has no sandbox at all. IE 7+ on Vista+ can instantiate ActiveX controls in a weak sandbox via low-integrity mode, but to imply it's comparable to the NaCl sandbox is just comically ignorant. NaCl validates the nexe's conformance and its subset of x86 instructions before it will run it (in that way being very similar to Java and .NET CLR). And NaCl runs entirely in an outer, system-level sandbox that denies all system and object access.

In contrast, IE's low integrity mode lets you read anything the user can, exposes massive chunks of the system as attack surface, and provides various writeable locations. On top of that, all non-trivial ActiveX controls in IE implement brokers which run fully outside the sandbox--something that's not even possible with NaCl.

> PNaCl is equivalent of silverlight which is cut down CLR.

Nope. And making that claim begins to underscore just how little you know about this.

> More performance - I doubt it. CLR+JVM is pretty much up there. The moment you add any virtualization, trap code or translation layer to native code via NaCl which you will require for security, there is going to be overhead which will knock it inline with a VM architecture. Startup time might be less - that is it.

Virtualization or trap layer? That's not even close to how NaCl works. It's really sad that you couldn't be bothered to read a one-page explanation before you launched into this completely wrong-headed diatribe. Please, start here next time, so your trolling can at least be superficially informed: https://developers.google.com/native-client/overview

> Better security - that's a lie. Virtualization on any layer never gave anyone better security. It's throwing stones in glass houses. The only hard security boundary is at the MMU/page table. As NaCl grows, you will see it break.

Once again, premised on your total ignorance of the subject matter. Come back when you have at least a basic knowledge of the thing you're criticizing.

> No-one getting sued? I'm sure the EU will have something to say when no other vendor implements it and Google uses it to leverage market share, much like Microsoft did in the late 90's and early 00's.

I'm sure there was an attempt at making an argument in this last line, but mostly it just seems to be randomly scrambling for scary sounding words.


Excuse my ignorance on the matter but you make yourself look immature, arrogant and patronising and this does no credit for your project or yourself.

So basically, NaCl:

1. Validates the binary image. Of course that validation has no holes in it. When it does it...

2. Stops unsafe operations. Of course it never misses any and knows every instruction side effect...

3. Oh wait...

I'd put cash on someone breaking the sandbox, I mean after all it's perfect isn't it:

http://www.matasano.com/research/NaCl_Summary-Team-CJETM.pdf

You can't build a flawless sandbox on top of a system by closing the holes one by one, especially on x86/x86-64. The number of edge cases is immense.


I'm sorry if you feel slighted, but I'm only attempting to dispel your ongoing stream of misinformation. And even after being corrected, you've persisted to the point where it's hard to perceive your behavior as anything short of intentional malfeasance.

As for the strawman in your latest comment, no one made any claims of "a flawless sandbox." I rightly pointed out that the security model of NaCl is far more robust, and you've offered nothing to counter that. Now, of course, software is going to have bugs, and the ones listed in that paper are significant. Fortunately, no combination of those bugs could have breached the outer sandbox, so they would not have represented a real-world system compromise.

The origin of that paper also circles back to a very important point. We realize that we need to attack security from many different angles (fuzzing, sandboxing, bounty programs, etc.). And that paper you cited was actually the result of Google sponsored competition in 2009 against a pre-release version of NaCl. The authors were the second place winners, and have continued to research NaCl's security both as independent researchers and paid consultants. (One of them is actually presenting at Black Hat on NaCl security this week.)

My point here is that an objective read of the paper actually paints NaCl very positively from a security perspective. Had you looked at the content rather than just making an assumption based on the title, you would have been aware of that.


The real problem with ActiveX wasn't the technology, it was the API. Code relied on a large, complicated, ever-changing API which was practically impossible for others to duplicate. Win32 was for the most part documented, but even a decade later Windows XP programs are often not useable in WINE.

This is the same problem with NaCl. The API is complicated and has a bunch of browser-specific features. Firefox can't just include "pepper.c" and have NaCl support; there's tons of work involved. One result is that Firefox won't support Flash for Linux anymore. Implementing Pepper, even though it's "open source", is more of a hindrance than not having Flash.

NaCl is better technology than ActiveX, but uses the same idea of leveraging an OS/browser-specific API as a handicap for competitors.


The earliest versions of NaCl attempted to align to the mess that is NPAPI, but that was literally years ago. For the last few years NaCl has used PPAPI, which is well-defined, more loosely coupled to the web, and has a clean versioning mechanism: https://developers.google.com/native-client/pepperc/


I see hundreds of APIs and beta APIs. There's no source code repository. The Python scripts to download an SDK refer to a Google "private_svn".

So Google spends 6 months working on the next version and Firefox and Opera don't even get access to it until the next SDK and are 6 months behind, and they have zero input in how it evolves. That's not how open source should work.



Your "source" link doesn't have source, see:

http://src.chromium.org/viewvc/native_client/trunk/src/nativ...

The other is a link to a build of pnacl sdk that also doesn't include source (src/ folder is empty).

Am I being dense here? Where's the hg, github, code.google link that actually has the source code? I couldn't find where the source is. You would think that would be pretty obvious for an open-source project, like maybe some giant button on the project page.

Anyway it doesn't change the point. This isn't being developed as a public open-source project. It isn't designed to be easily added to browsers other than Chrome. Like ActiveX, it's being used as leverage to make one browser better at the expense of others.


I can appreciate that you're unfamiliar with the build and dependency management tools used by Chromium and related projects. However, it's disingenuous to imply that somehow the project is any less public or open because of that.

Some dependencies are pulled in and set-up by the checkout scripts, which is not an unusual degree of complexity for a large group of projects with such broad dependencies. It's all clearly documented and everything you need to know to checkout, build, and contribute code is linked from right here: http://www.chromium.org/nativeclient


I think you pretty much proved my point. NaCL is a sandbox, the very thing that should have the cleanest, clearest delineated lines. But there's no standalone version, no stub version. And you seemingly need a PhD in Chromium to even find the source much less get it working in anything else.

I guess Mozilla is unusually capable then because their download link is right here:

https://developer.mozilla.org/en/Mozilla_Source_Code_%28Merc...

Pretty simple.

So you're working on Chromium and think navigating a buildbot to get Python scripts that download from a private svn is not unusual... ok, fine, but don't be surprised if people don't want to touch it (or can even figure out how to). I certainly understand better now why Mozilla would rather just drop Flash support.


So, the new claim is that making a directory and running a script is too hard for you: http://www.chromium.org/nativeclient/sdk/howto_buildtestsdk

Seriously, this is just absurd. I don't know what you think you're arguing at this point, but I don't have the energy to correct you anymore.


Looking at the code, you have to check out the source and then run another script to check out the source from Google's private source control system/depot.

Why it isn't just in a public repo, I don't know. It looks like it is set up so they can pull it at a moment's notice.

I agree with your comments.


This is just silly. You can't say that you can pull the code from a public repository, and then claim it's private. If it's set up so you can pull the code, and you do so successfully, then it's by definition public. It's totally understandable that you're unfamiliar with how the projects and dependencies are distributed. The Chromium projects are one of the largest and most complex codebases you're likely to encounter. However, it's all clearly documented and public, with numerous open source contributors. So, it's very disturbing that your first thought is not to try to learn why, but rather to insinuate some irrational conspiracy theory.


Sorry I will clarify with a genuine question: where is the public source code with changelog?

It seems like a snapshotting tool, so you check out the stub and then dump the snapshot into it from the depot.

(this is similar to how Windows is built inside Microsoft).


Just follow the instructions for checking out and building NaCl: http://www.chromium.org/nativeclient/sdk/howto_buildtestsdk

Everything is in public svn and/or git repositories. The thing that's confusing you is probably how modules are split out as dependencies. They're not checked directly into the tree, and instead are listed by repository URL and pinned revision in DEPS files. On checkout and sync gclient pulls the correct revision from the appropriate repository: http://dev.chromium.org/developers/how-tos/depottools#TOC-DE...
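The DEPS mechanism described above is just Python syntax evaluated by gclient. A minimal hypothetical fragment - the directory path, repository URL, and revision below are invented for illustration, not taken from the real Chromium DEPS file:

```python
# Hypothetical DEPS fragment. Each entry maps a checkout directory to a
# repository URL pinned at a specific revision; `gclient sync` fetches
# exactly that revision into the named directory, so the dependency is
# never checked directly into the main tree.
vars = {
    "example_trunk": "http://example.googlecode.com/svn/trunk",
}

deps = {
    "native_client/third_party/example_lib":
        vars["example_trunk"] + "/example_lib@1234",
}
```

Bumping a dependency is then a one-line change to the pinned revision, which is why the modules appear "missing" if you only browse the main repository.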


I don't understand your argument here. NaCl is an open source and openly developed project that anyone is welcome to participate in. The repository, tracker, and mailing lists are all public. In fact, the NaCl team has made numerous efforts to involve the other browser makers (even if they haven't shown interest so far).


> "The internet is not a budget strip-mall of different low grade outlets; it's more an orchard full of fruit you can discover and pick at will from millions of trees."

Maybe it's just me, but I'm having a difficult time trying to comprehend how this statement relates to your argument. Some perspective please?


It's a commentary on how providing isolated islands (app stores) damages the distributed nature of the world wide web. Effectively every app technology is a landgrab by some entity who wants some exclusive chunk of the web with their own rules, usually for commercial gain.

Basically the principle turns the world wide web into another WalMart or McDonald's rather than a vast library.


You keep giving Microsoft's proprietary technologies as bad examples vs Google's open source ones, and keep saying it's the same. It's not. Other vendors weren't allowed to use ActiveX or Silverlight, nor could they help improve it, and fix their bugs.


Some little known facts which point to your assertions being wrong:

1. IE4 worked on UNIX (Solaris/HP-UX) and supported ActiveX.

2. MainSoft provided tools to port your ActiveX to UNIX (Usually a straight recompilation and little else required).

3. Other vendors are allowed to use Silverlight - look: http://www.mono-project.com/Moonlight

4. ActiveX, COM and MSRPC are all open specifications here: http://msdn.microsoft.com/en-us/library/dd208104(v=prot.10)

1-2 died because there was lack of demand.

3 died because there was lack of demand and MS decided it was the wrong route.

4 is used by MANY open source projects from Samba to tsclient.

As far as improving things goes, I've had many a thing fixed by Microsoft over the years. They ALWAYS solve a problem.


I'm not sure I agree. I've watched large numbers of people crash and burn running 'agile' processes.

Agile doesn't work at all with large numbers of people on code that is constantly changing. Regardless of how you package it, the only way to achieve coherency and scalability of product development is through extensive planning, solid architecture and loose coupling of components.

Agile throws those three concepts out of the window for time to market. Sure, your first few iterations will survive this, but as your product grows, so will coupling - exponentially. This eventually cripples you.


I certainly agree that a lot of "agile" projects are clusterfucks, especially large ones. Of course, that's true of all software projects. And a lot of what people sell as "agile" is bullshit. So I'm not sure how much that proves.

I also agree that well-run agile approaches throw big up-front design out. But I think they can happily achieve solid architecture and loose coupling.

There's nothing you can achieve with up-front planning that you can't achieve by refactoring your design after a release. The main differences are that you need some supporting practices to make refactoring economical, and that you have much more information available to you after release than you do before-hand.


Well actually you're wrong on the following point:

There's nothing you can achieve with up-front planning that you can't achieve by refactoring your design after a release

If your application is relatively standalone then yes, but if you have heavy APIs and integration (which value-adding applications usually do), you're up shit creek.


Depends on what sort of API you mean. Internal ones are fine, so I suppose you're talking about public APIs. Which again are fine on the client side; it's just the server that can be harder.

But I still think the way to good public server APIs isn't to sit in one's arctic Fortress of Architecture and think real hard. I think you just build and iterate in private, refactoring as you go, and then switch to a closed beta. And of course build your protocol in such a way that it's reasonably extensible.

Up-front planning is still no panacea. You will have to change your protocol someday. Someday soon if you're up to something interesting, because the world doesn't stand still. And even if it does, your competitors won't.


I've worked in a couple of companies that did this. We had the following flow and it worked great:

Task assigned to developer via email, developer takes current release tar from ftp and untars, does work, creates patch, forwards patch to colleague to review, forwards to release manager who integrates all incoming patches, drops into a new tar, releases to ftp.

Some of this was automated with a few hundred lines of perl. The rest was on a whiteboard.
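Stripped of the ftp and email plumbing, the flow above boils down to circulating unified diffs. A small Python sketch of the patch-generation step (file names and contents are invented for illustration; the companies presumably used `diff`/`patch` rather than Python):

```python
import difflib

# Two versions of a file, as they'd exist in the old release tar
# and in the developer's working copy.
old = ["def greet():\n", "    print('hello')\n"]
new = ["def greet(name):\n", "    print('hello', name)\n"]

# Produce a unified diff suitable for forwarding to a reviewer and
# applying with `patch -p1` on the release manager's side.
patch = list(difflib.unified_diff(old, new,
                                  fromfile="a/greet.py",
                                  tofile="b/greet.py"))
print("".join(patch), end="")
```

The release manager's job is then the reverse step: applying each incoming patch to the current tree before rolling a new tar.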


Interesting.

They could skip the ftp part, because git saves bandwidth.

Still, this is a good way to do the work.


They use scp now rather than ftp. They don't use git because it's too complicated for contractors to handle.



