Hacker News
Ask HN: Is using Docker worth it or just adds a lot of complexity?
14 points by carrygan 43 days ago | 23 comments



It's worth it for me, for what it's worth. I have a pile of server applications that I can seamlessly upgrade and rollback and not worry about clashing dependency versions on my root OS.

IMO, using containers on a server is a bit like using Flatpak for your desktop apps. You can have a stable, well-configured base and still run the cutting edge stuff pre-packaged in a convenient way.


Every new thing adds complexity to your stack. So yes, Docker adds complexity. The question you should be asking is "Do I have a problem that Docker can solve, and do the benefits outweigh the overhead of Docker in my specific situation?". The answer can be yes or no depending on your situation/context.


but but but what if I just want a whale as part of my workflow so when I finish I can pat myself on the back for a job whale done



If you compare containers to VMs, then for me containers are worth it because I do not need to manage the OS. I use containers in the cloud. But everything depends on your use case and the resources that you have.


"worth it" is hard to answer unless you tell us your situation and goals.


Usually yes. It's not that much complexity in and of itself, although how you use it can affect that (e.g. docker compose). I prefer a big fat Docker image to a fleet of interconnected ones, for the same reason static linking beats dynamic linking by default: it may perform less well, but you can optimize for that later if you need to, while deployment and consistency are easier with static linking.

But like, if you don't need it, you don't need it. It depends on what you're doing. If you have multiple runtime dependencies, it will help a lot. If your target is a single static executable (e.g. you use golang, or java with an all-in-one jar target) and you have no runtime dependencies like a database (or even if your only runtime dep is a single SQL database connection, or just a single JVM runtime where your only hassle is documenting the correct JVM version to use), then it may not be adding that much value, especially early on.

Never found it too hard to add, either. A Dockerfile is a glorified shell script, so if you have a README for installing the whole mess on a normal machine, following that README again (and, as always, finding out how inaccurate it is lol) is all you really need to do.
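
E.g., a minimal sketch for a hypothetical Python app whose README says "install Python, pip install the requirements, run app.py" (file and app names are just placeholders):

    # Hypothetical minimal example: the README's install steps, translated ~1:1
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]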

For the typical enterprisey legacy project with over a dozen runtime deps and micro and macro (lol) services all over... yeah, it helps a ton to avoid version-hell, works-on-my-machine deployment drama. It was so nice when it started coming out back in the early teens -- so much so that it was worth using for everything nonprod even when it was still super flaky and not yet suitable for production (e.g. remember how bad its disk space leaks used to be, and how you needed to script the cleanup yourself?)


It works well for me. I declare how I want my server to be set up, and I can trust that my local and prod setups are exactly the same. I can also let other people work on the same project and trust that their environment will be the same.

It was hard to pick up at first, but now it's undeniably an improvement to my workflow.


I don't use Docker for any of my personal projects. I've found that a bit of bash and systemd can do pretty much everything I need. I used to be a big Docker fan, but at some point I realized that I was just avoiding learning proper server management and core automation tools (like bash, which isn't so bad, it turns out).

If you work with other people, then yes, definitely use Docker. It adds a layer of foolproofing that's much needed in larger teams, and it's basically a lingua franca for deploying things these days.


To me, it's completely worth it. I am able to replicate boxes and environments rapidly without having to spend time re-creating them or worrying about system issues. It also makes your dev and your prod easier to manage and removes variability.


It just adds complexity, as a default. If you are balancing a delicate mix of system dependencies, it may be worth it. For example, certain scientific software (eg for computational chemistry) may be easier to manage and set up with Docker.


Works for my use case: packaging dependencies and deploying. If you're considering alternatives for your use case, what do you think a Docker replacement would look like?


Docker Compose, Swarm - yes.

K8s etc. - no, unless your product is really, really big _and_ complex. The rest are using it because a) there are no better alternatives, or b) they're mindlessly following the hype.


Depends on the use case, and the alternatives available to you.


I think it's worth it compared to the idea of spinning up an entire system for one group of processes. Docker can be very easy if it's done right.


Worth it, better with docker compose. Makes lift-and-shift easy.


With ChatGPT, it's worth it.


I was scared of Docker and React, but no more.

I thought, why can I not run shell scripts to pull my code from git, install dependencies from requirements.txt or package.json, and simply use pm2 as the webserver host and reload all the apps? So much easier to deploy, since installing pm2 and some shell scripts with env variables for deployment takes only about 5 minutes, right? No need to securely transfer or keep these images with weird code! Thing is, I was ignoring the pertinent use cases...

You have to understand the best ways to use it for your scenario and how it is typically used.

Use case 1. Prototype demo using funky services like file conversion. Running "docker run fileconversionapi" is as fast as grabbing an API key, but with no worries about usage billing! And it's super easy to network, locally or eventually in production, either with the host network or docker swarm. Value - Self-reliance.

Use case 2. Actually self-hosting apps. Might include a long-running database image etc., using a locally mapped volume to avoid data loss. Super quick, easy defaults. Value - Privacy
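
E.g., something like this for a long-running Postgres, with data persisted on the host (container name and host path are just placeholders):

    # Hypothetical example: data lives on the host, so the
    # container itself stays throwaway
    docker run -d \
      --name mydb \
      -e POSTGRES_PASSWORD=change-me \
      -v /srv/mydb/data:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres:16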

Use case 3. Live production app - building from working code. This is where people go wrong. They have a cool Python server script with loads of dependencies and even some data / model files, and they want to put it in production. Rather than run "pip install -r requirements.txt" on each new server or their first cloud server, they turn to Docker. Common mistakes: they put environment variables in the wrong place (in the code itself, or baked in via ENV), or they put the code into the image along with the requirements/keys (e.g. git clone <repo>, then pip install -r requirements.txt) but distribute the Dockerfile rather than the full built image. That doesn't work, because the builds can fail as the repo's code changes, etc. There's little point automating build scripts like this compared to the benefit of sharing the latest built image. - Value: you can delay productionising the code until later and it's still OK. But if you are a newb at Docker (like I was), you'll probably do it wrong, even if your wider coding experience helps you avoid the security pitfalls. You'll be needlessly rebuilding the image... Protip: a clean rebuild of the Docker image should happen once every 6 months at most. Multiple layers should only happen when optimization becomes vitally important.
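
A rough sketch of the healthier flow: build once with code and deps baked in, then share the built image itself (image and registry names are hypothetical):

    # Build once, with code and dependencies baked into the image
    docker build -t myapp:1.0 .
    # Share the built image, e.g. via a registry...
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    # ...and keep environment variables at run time, not in the image
    docker run -d -e API_KEY=change-me myapp:1.0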

Use case 4. Live production app - rolling updates / regular code / sharing the image so developers nr. 2 and 3 can instantly onboard. You can switch easily to AMIs whose startup commands exec the docker run calls, you can rely on older AMIs, or you can use Kubernetes. Networking is generally simple: a Docker container does not know about the VPC, so it can't bake in expectations that cause networking hell. - Value: high-level control of the complexity across a range of deployment options, from single instance to multiple instances. Massive value: you can move Docker images between cloud providers as .tar files and instantly set up your infrastructure, with valid letsencrypt SSL certs and everything, from the last host.
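
The .tar move is just docker save / docker load, roughly (image and host names are placeholders):

    # On the old provider: export the built image to a tarball
    docker save -o myapp.tar myapp:1.0
    # Copy it over, then import and run it on the new provider
    scp myapp.tar user@newhost:/tmp/
    ssh user@newhost 'docker load -i /tmp/myapp.tar && docker run -d myapp:1.0'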

Use case 5. You want to clearly establish a separate microservice, maybe because your new code requires Python 3.10, not 3.7, or something. You're not even sure if you'll deploy it; you're just testing, but if it works, you need it in production tomorrow. Well, anybody reading "python:3.10" in the short Dockerfile can tell why you did that a lot more obviously than from a random VM once named "python 3.10 code" that now does random machine learning things. Value - throwawayability
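
I.e. the whole reason the service exists is visible in line one of a Dockerfile as short as (the script name is hypothetical):

    # Hypothetical example: the version pin is the first thing you read
    FROM python:3.10-slim
    COPY experiment.py .
    CMD ["python", "experiment.py"]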


Imagine a type of executable that could run on any Linux server. Would this "executable" be worth it?


Snaps seem to have made Ubuntu worse.


True, but not for any of the reasons cited above, or any reasons necessarily relevant to Docker (or containers, more broadly).


What is your use case?


At least I'm not the only person who thought that the question was under-specified and that use case matters.



