
We tried this where I worked (with the exception of the evil desktop app financial program)... and had to retract after a zero day defacement in one of our web apps. In the meantime we also learned that keeping all of your web apps 100% up to date at all times is really freaking difficult. The good news is that the (failed) attempt got us off of a few client side applications and made us much more platform agnostic than we were before.

If you have the resources of Google it's a bit different, especially if all of the software is custom and developed internally.



You can't succeed with this model by just setting your firewall to allow 0.0.0.0/0. This approach still requires defense in depth, and a holistic view of security. If someone was able to deface your web app, then your company wasn't actually using all the components that are required to make this model work (such as authenticated devices, device patch management, and user 2-factor authentication).


Device patch management only works when patches are available. Ever heard of a 0day?


What are "authenticated devices"? The closest I can think of is client certificates being installed on the devices and used as a first level of authentication. It could be anything from TLS client certificates to VPN certificates.


Yep, client certs installed on a device with verified boot and an account authenticated via 2FA would be a good start.


If you do it right (store the cert in a TPM), the device itself is actually a second factor, so you don't need anything other than the device.


Wouldn't that require a browser plugin to login with?


You can have an SSO server that requires a TLS client certificate signed by your own internal CA, or you could put it behind a VPN authenticated with the certificate. Either way, with no custom software, you get device and user authentication.
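As a rough sketch of the first option, using Python's stdlib ssl module (file names here are hypothetical placeholders, not anything from the thread): the SSO front end's TLS context refuses any client that can't present a certificate signed by the internal CA.

```python
import ssl

def make_mtls_context(ca_file=None, cert_file=None, key_file=None):
    """Server-side TLS context that demands a client certificate
    signed by our internal CA. File names are hypothetical; pass
    None to build the context without loading real files."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)  # the SSO server's own identity
    if ca_file:
        ctx.load_verify_locations(ca_file)        # trust only the internal CA
    ctx.verify_mode = ssl.CERT_REQUIRED           # handshake fails without a client cert
    return ctx
```

You'd hand this context to whatever wraps the listening socket; with CERT_REQUIRED set, a device without an internal-CA-signed cert never gets past the handshake.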


Was this a custom app? My company puts most of our stuff on the internet, but most of it is stuff we've bought from Atlassian and the like.


The only way to succeed with this is with heavy firewalling or VPNs. Any application has several unknown zero days, so just opening it up to 0.0.0.0/0 makes it possible for blackhats to get in. The only question is how much your information is worth to somebody. If it is worth less than the price of a brand-new zero day you might be OK, but there are still the script kiddies and political blackhat organizations who mass-deface any site that has a "zero day" vuln (zero day meaning, in this case, unknown to the operators of the site).


There are different layers of firewall, and VPN isn't really the issue here.

You can still have location-aware servers that can talk to each other directly. This should be done over an encrypted channel as much as possible.

As to firewalls, allowing access on each server only on the ports its applications run on is probably a good start. Better still would be publicly facing machines that act as reverse proxies to backing servers that run said applications.

As intimated, allowing only approved machines (likely with client certificates and pinning to MAC addresses, and probably only a limited number of accounts beyond that) can tighten things further.

Putting your (internal-use) applications on the internet doesn't mean unlocking all the doors. There are ways to mitigate and reduce the effects of a 0-day vulnerability in practice. The fact is that making it all available from anywhere forces you to think about the risks in a way that is actually better in practice than just believing that, because you are behind a hard shell, nobody can get to the soft gooey center.

A hardened system involves more than firewalls and VPN access. A properly hardened system should be able to run over the internet. TLS channels with certificate/MAC pinning alone can go a long way in terms of communications, and offer far more protection than a typical firewall/VPN setup. This goes for everything from SSH to your internal services. For that matter, not exposing anything beyond SSH and requiring tunnels for all communications may be simpler still.
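A minimal sketch of the pinning half in Python (the function names are mine, not anything standard): after the TLS handshake, grab the peer's DER-encoded certificate and compare its SHA-256 fingerprint against the value you pinned.

```python
import hashlib
import hmac

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded peer certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def pin_matches(der_bytes, pinned_fingerprint):
    """Compare against the pinned value in constant time."""
    return hmac.compare_digest(cert_fingerprint(der_bytes), pinned_fingerprint)

# After the handshake: der = tls_sock.getpeercert(binary_form=True),
# then drop the connection unless pin_matches(der, PINNED_FINGERPRINT).
```

The constant-time compare is cheap insurance; the real work is distributing and rotating the pinned fingerprints alongside your certificates.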

Mix in LDAP for access, with accounts, machines, and certificates all tied together, and you have a pretty good base recipe for a hardened system. That said, this isn't the only approach, just me rambling on about the ideas. There is overhead in terms of development, operations, and management to set up such a system. Not everyone can implement one, given what they may be starting from; a smaller company would have an easier time in many cases than a larger one. It may require the use of a Windows terminal server behind a secured channel in order to keep some critical applications (likely wrt finance). Other applications may be excessively costly to migrate, and others still may not have the necessary protections.

Given that most internal applications are web-based these days, it is slightly easier now than at any other time in computing history.


> There are several unknown zero days in any application

Does this include the firewall and VPN?


No, I am talking specifically about the hosted application. There can be security bugs in the firewall or VPN as well; I was focusing purely on the difference between hosting a vulnerable application on a local IP vs. a public one.


> There are several unknown zero days in any application

I think you want all your applications to authenticate the device and the user before proceeding to anything. This looks indeed impossible with third party closed source apps (if only because you can never be sure there is no backdoor).

Then, even if you authenticate every remote peer using TLS client certificates, you have to closely follow the vulnerabilities of your TLS implementation... But that should be no less manageable than making sure your firewalls are reliable.


> This looks indeed impossible with third party closed source apps (if only because you can never be sure there is no backdoor).

Seems easy enough: don't allow the app to bind to a port on any interface besides loopback, then put an authenticating reverse proxy in front of it that can actually receive remote connections.

If it's an HTTP service, you can use nginx with client SSL certificates. For other protocols, spiped[1] might be a good choice.

[1] https://www.tarsnap.com/spiped.html
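The loopback half of that setup is easy to get right. Sketched in Python (the port argument is illustrative; 0 asks the OS for a free ephemeral port):

```python
import socket

def loopback_listener(port=0):
    """Bind the internal app strictly to loopback, so the only way in
    from outside this host is through the authenticating reverse
    proxy (nginx, spiped, etc.) running in front of it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # never 0.0.0.0
    srv.listen()
    return srv
```

For closed-source apps you can't edit, the same effect usually comes from a listen-address config option, or failing that, network namespaces or firewall rules that block the port externally.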


I am not sure how open source vs. closed source plays any role in this. You can support a feature in a closed-source app just as much as in an open-source one.

TLS implementations have had serious problems, as the last few years proved. We need a more fundamental change in security protocols and implementations, using reliable crypto (for example, elliptic curve cryptography) and implementing them in safer languages (like Rust).



