
Whenever I read these kinds of posts on this website I think of Sterling Hayden in Dr. Strangelove. (The crazy SAC commander who thinks the Russians are plotting to steal Americans' precious bodily fluids).

I understand that people don't trust the NSA/US government. And they shouldn't: the US government will always put its interests above yours and mine, and above those of allied countries.

At the same time, this stuff is bordering on parody. Very few of us (maybe none of us) need to worry about "the NSA MITM-ing our NPM packages". If you're that paranoid, then you shouldn't be using GitHub, NPM, or non-local dependencies. And of course you should be reviewing everything manually.



I didn't mean that this is going to happen. I wanted to give an example of a potential threat. My idea was to show one of many problems with centralization and with relying so much on GitHub and GitHub's SSL certs.

Maybe we can start signing our commits to increase security, given the potential threat, the same way that HTTPS adoption grew after the Snowden revelations.
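
For anyone who wants to start now, git already supports this. A minimal sketch, assuming you already have a GPG key pair (the key ID below is a placeholder):

    # Tell git which key to sign with (key ID is made up).
    git config --global user.signingkey 0123456789ABCDEF

    # Sign a commit and a release tag.
    git commit -S -m "signed commit"
    git tag -s v1.0 -m "signed tag"

    # Or turn signing on for every commit by default.
    git config --global commit.gpgsign true

Signing is the easy half; the hard half is distributing and trusting the public keys, which is where the centralization problem comes back.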

We can also think of better ways of sharing and releasing open source code. Debian has a neat system based on signing keys, so it's fairly safe to install software from their repos [1]; a sketch of what that verification looks like follows the references below. Maybe a better system can be developed than just grabbing whatever from GitHub [2] and running it on your machine.

[1] https://en.wikipedia.org/wiki/Debian#Development_procedures

[2] https://github.com/mklement0/n-install/blob/master/bin/n-ins...
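
For a concrete picture of what Debian's system buys you, here is roughly the check apt performs, done by hand. A sketch, assuming the debian-archive-keyring package is installed:

    # Fetch the signed release metadata for a suite.
    curl -fsSO http://deb.debian.org/debian/dists/stable/InRelease

    # Verify its inline signature against Debian's archive keys.
    gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg InRelease

The InRelease file carries checksums of the package indexes, which in turn carry checksums of every package, so one signature anchors the whole chain.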


There's definitely a need for a better system. Some would argue that we have package managers specifically to help solve this problem. Yet, for many developers who just want a "good enough" install system without thinking or working at it much, curl-and-exec gets the job done.
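
Even inside the curl-and-exec world there's a cheap middle ground: publish a checksum next to the script and have users verify it before running. A sketch, with a hypothetical URL and a placeholder hash:

    # Download to a file instead of piping straight into a shell.
    curl -fsSL https://example.com/install.sh -o install.sh

    # Compare against the checksum published out of band
    # (this hash is a placeholder, not a real published value).
    echo "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  install.sh" \
      | sha256sum -c - || exit 1

    sh install.sh

This only helps if the checksum is distributed over a different channel than the script, of course; a checksum on the same hacked page protects nobody.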

That so many people aren't thinking about security at all is a sad comment on the state of software engineering. But perhaps inevitable, given our cultural history of favoring freedom over security.


> I wanted to give an example of a potential threat.

I really don't like this line of thinking. It's the same one used by news organizations to plump up their stories, or by politicians to make an improbable threat seem more real. In both of those cases, I think the long-term effect is to cause the public to think that very rare events are a lot more common. The result is not a culture of wariness but a culture of fear.

I'd much rather people present "likely potential threats" instead of "worst-case potential threats."


In security, many things are "potential" threats. Being unlikely doesn't mean the threat doesn't exist. For example, a guy found a potential threat in Rails [1], and the Rails developers dismissed his findings as unlikely to be exploitable. Then the guy went and hacked GitHub to prove that the issue was real [2][3] and that even the best Rails developers were vulnerable.

[1] https://github.com/rails/rails/issues/5228

[2] https://github.com/rails/rails/commit/b83965785db1eec019edf1...

[3] https://arstechnica.com/information-technology/2012/03/hacke...


OK. Let's not talk about potential threats. That's the language of fear, control, and paralysis. It's the language of liars, demagogues, and exaggeration.

Let's talk about risks and goals. The language of opportunities.

An approach of `curl | bash` takes on a needless amount of risk to accomplish its goals. It can do far too many things, of which it actually needs only a small subset, so it offers a lot of opportunity for bad things to happen alongside the things we want. Maybe there are ways to do the same things, to get the same ends, without taking on so much risk.
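
One way to shrink the risk without giving up much convenience is to split the fetch from the exec, so you can read what you're about to run, and then run it with less authority. A sketch with a hypothetical URL:

    # Fetch to a file instead of piping straight into bash.
    curl -fsSL https://example.com/install.sh -o install.sh

    # Actually read what it's going to do.
    less install.sh

    # Run it in a throwaway container instead of your login shell.
    docker run --rm -v "$PWD:/work" -w /work debian:stable sh install.sh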

How do you feel about this subject?


Good question. The specific problem that I'd like to tackle is sharing code as safely as possible.

The happy path is that you have some code to share and I want to get it exactly as you wrote it.

Right now, the issues with just using GitHub to share your code are the following:

1. The GitHub web app gets hacked and lets someone else push a commit (like the Rails hack mentioned above).

2. A GitHub employee modifies files on the production servers.

3. GitHub's cloud provider gets hacked.

4. A state actor with lots of resources MITMs the github.com domain and internet traffic, and you fetch something else.

I think all these problems could be solved if:

1. Git enforces that all commits must be signed.

2. There is a decentralized list of usernames and keys.

This feature doesn't exist in git, but it would be great if you could run `git clone` and have it reject the clone unless all commits and tags have been signed.
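
You can approximate it right after cloning, though (and `git merge`/`git pull` already accept a --verify-signatures flag). A minimal sketch, assuming the expected signers' public keys are already in your GPG keyring:

    #!/bin/sh
    # Fail if any commit reachable from HEAD is unsigned or badly
    # signed. (The pipeline is the script's last command, so the
    # subshell's `exit 1` becomes the script's exit status.)
    git rev-list HEAD | while read -r commit; do
      git verify-commit "$commit" >/dev/null 2>&1 || {
        echo "unsigned or badly signed commit: $commit" >&2
        exit 1
      }
    done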

But what if someone hacked GitHub to add a commit and signed it themselves? We need some kind of CA so that only signed commits from the right people are accepted.

So there should be a fixed list of committers allowed in the repo, and git would have to enforce that as well.
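
git already exposes enough signature metadata to prototype that check. A rough sketch, where the allowlist is hypothetical and would have to be distributed out of band:

    #!/bin/sh
    # Hypothetical allowlist of trusted GPG signing-key IDs
    # (these values are made up).
    ALLOWED=" 0123456789ABCDEF FEDCBA9876543210 "

    # %H = commit hash, %G? = signature status (G = good),
    # %GK = ID of the key that made the signature.
    git log --format='%H %G? %GK' | while read -r hash status key; do
      case "$ALLOWED" in
        *" $key "*) [ "$status" = G ] && continue ;;
      esac
      echo "commit $hash is not signed by an allowed key" >&2
      exit 1
    done

Key revocation and rotation are still unsolved in this sketch, but the enforcement hook is nearly there already.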

Then you have the problem that if all the public keys are stored on GitHub, you are almost back to all the problems of GitHub getting hacked. It would be great to have as many copies of the usernames and public keys as possible. Something like a blockchain would be a good fit.

To recap, by having a decentralized system of users and public keys, and making git validate that the commits are signed and from the right people, we could have a much more reliable way of sharing code.

Additionally, we could have an "audit" system on top of all this where users can review code and mark it as "good". Then, if a repo has a tagged version with 5 reviews, you can hope that it's pretty safe to run that version. Because it's a single system of usernames, you can check who the reviewers are.

It might be a bad idea to `curl | bash` a script from a stranger, but at least you remove the risks of the code being delivered by a third party.


> To recap, by having a decentralized system of users and public keys, and making git validate that the commits are signed and from the right people, we could have a much more reliable way of sharing code.

I can see that you've put some thought into this. I can also see you don't generally spend most of your time thinking about security. This is not a bad thing! Most people don't!

But it does occasionally show up in sloppy thinking about system design, such as when you reflexively conflate a commit being signed by a key with a commit being from the person who is expected to own that key. It means you didn't stop and think about how to integrate rapid key revocation in case of compromise, or how to rotate keys over time.

Or how social review systems tend toward unreliability, as reviews are left by people who are not experts, and users trust the aggregate numbers anyway. How meaningful is a 4.5-star average from five reviewers on a cryptography library, if the reviewers are five random people whose expertise you know nothing about and are ill-equipped to judge?



I haven't. Thanks for sharing!


It also used to be considered paranoia to think that the NSA might do all the things in the Snowden leaks, but here we are.


I personally am not worried. If I were running some nuclear centrifuge in Iran/N.Korea/etc., then I'd be worried.


This is another phrasing of the worn-out "If you have nothing to hide, you have nothing to fear" argument, and it doesn't have any place here.


Sure, if I am the moral equivalent of an authoritarian gov't seeking nuclear arms.


That’s why I only drink rainwater and pure grain alcohol. Purity Of Essence.



