
The only way to succeed with this is with heavy firewalling or VPNs. There are unknown zero days in any application, so opening your application up to 0.0.0.0/0 makes it possible for blackhats to get in. The only question is how much your information is worth to somebody. If it is worth less than the price of a brand-new zero day you might be OK, but there are still the script kiddies and political blackhat organizations who mass-deface any site with a "zero day" vuln (zero day meaning, in this case: unknown to the operators of the site).


There are different layers of firewall, and VPN isn't really the issue here.

You can still have location-aware servers that can talk to each other directly. This should be done over an encrypted channel as much as possible.

As for firewalls, allowing access on each server only to the ports its applications run on is probably a good start. Better still would be publicly facing machines that act as reverse proxies to backing servers running said applications.
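The reverse-proxy topology can be sketched at the socket level (in practice nginx or HAProxy would do this; the backend address is whatever internal-only host the application lives on):

```python
import socket
import threading

def pump(src, dst):
    # Copy bytes one way until the sending side closes its end.
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.close()

def handle(client, backend):
    # Forward one accepted connection to the backing application server.
    # 'backend' is a (host, port) tuple on the internal network.
    upstream = socket.create_connection(backend)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)
```

This only shows the forwarding; a real front machine would also terminate TLS and authenticate the client before ever touching the backend.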

As intimated, restricting access to approved machines only (likely with client certificates and pinning to MAC addresses, and probably a limited number of accounts beyond that) can tighten things further.
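Certificate pinning, for example, can be as simple as comparing a SHA-256 fingerprint of the peer's DER-encoded certificate against a value recorded when the machine was approved (a sketch; where the expected fingerprint comes from depends on your provisioning):

```python
import hashlib
import hmac

def cert_fingerprint_ok(der_cert: bytes, expected_sha256_hex: str) -> bool:
    # Compare the SHA-256 digest of the peer's DER-encoded certificate
    # against the fingerprint stored when the machine was approved.
    actual = hashlib.sha256(der_cert).hexdigest()
    # Constant-time comparison avoids leaking how many leading bytes matched.
    return hmac.compare_digest(actual, expected_sha256_hex.lower())
```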

Putting your exposed (internal-use) applications on the internet doesn't mean unlocking all the doors. There are ways to mitigate and reduce the effects of a 0-day vulnerability in practice. In fact, making it all available anywhere forces you to think about the risks in a way that works better in practice than believing that, because you are behind a hard shell, nobody can get to the soft gooey center.

A hardened system involves more than firewalls and VPN access. A properly hardened system should be able to run over the internet. TLS channels with certificate/MAC pinning alone can go a long way for communications, and offer far more protection than a typical firewall/VPN setup. This applies to everything from SSH to your internal services. For that matter, exposing nothing beyond SSH and requiring tunnels for all communications may be simpler still.
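On the TLS side, requiring client certificates is mostly server-context configuration; a minimal sketch using Python's ssl module (the certificate and CA file paths in the comments are hypothetical, since they depend on your PKI):

```python
import ssl

# Server-side context that refuses any peer without a valid client certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED           # mutual TLS: client cert mandatory
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
# A real deployment would also load its key material, e.g.:
#   ctx.load_cert_chain("server.crt", "server.key")    # hypothetical paths
#   ctx.load_verify_locations(cafile="client-ca.pem")  # CA that signs client certs
```

With `CERT_REQUIRED` set, the handshake itself fails for any peer that cannot present a certificate the configured CA vouches for, before the application sees a single byte.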

Mix in LDAP for access, with accounts, machines and certificates all tied together, and you have a pretty good base recipe for a hardened system. That said, this isn't the only approach, just me rambling on about the ideas. There is overhead in terms of development, operations and management to set up such a system. Not everyone can implement one, given what they may be starting from. A smaller company would have an easier time in many cases than a larger one. It may require a Windows terminal server behind a secured channel to keep some critical applications (likely w.r.t. finance). Other applications may be excessively costly to migrate, and others still may not have the necessary protections.

Given that most internal applications are web-based these days, this is slightly easier now than at any other time in computing history.


> There are several unknown zero days in any application

Does this include the firewall and VPN?


No, I am talking more specifically about the hosted application. There can be security bugs in the firewall or VPN as well; I was focusing purely on the difference between hosting a vulnerable application on a local IP vs. a public one.


> There are several unknown zero days in any application

I think you want all your applications to authenticate the device and the user before proceeding with anything. This looks indeed impossible with third-party closed-source apps (if only because you can never be sure there is no backdoor).

Then, even if you authenticate every remote peer using TLS client certificates, you have to follow the vulnerabilities of your TLS implementation closely... But that should be no less manageable than making sure your firewalls are reliable.


> This looks indeed impossible with third party closed source apps (if only because you can never be sure there is no backdoor).

Seems easy enough: don't allow the app to bind to a port on any interface besides loopback, then put an authenticating reverse proxy in front of it that can actually receive remote connections.

If it's an HTTP service, you can use nginx with client SSL certificates. For other protocols, spiped[1] might be a good choice.

[1] https://www.tarsnap.com/spiped.html
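The "loopback only" half of this is just a bind-address choice on the internal app's side; a sketch (port 0 here only so the demo grabs any free port):

```python
import socket

# Listen only on the loopback interface: remote hosts cannot connect
# directly, so all outside traffic has to come through the reverse proxy.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # 0 = let the OS pick a free port for the demo
srv.listen()
host, port = srv.getsockname()
```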


I am not sure how open source vs. closed source plays any role in this. You can support a feature with a closed-source app just as well as with an open-source one.

TLS implementations have had serious problems, as the last few years proved. We need a more fundamental change in security protocols and implementations, using reliable crypto (for example, elliptic-curve cryptography) and implementing them in safer languages (like Rust).



