Wow. I wasn't expecting this level of interest, not in a million years. I'm very grateful (and a bit overwhelmed!). Thank you for your thoughtful comments. I just added a few more clarifications to the document, mostly inspired by your interactions.
Redis gives you fundamental data structures (lists, sets, hashes, plus a few others like zsets and hyperloglogs) that represent a powerful way of expressing computation. I'm not acquainted with the full spectrum of Postgres's data structures, so I can't make a fair comparison; all I can say is that Redis is awesome and I can't get enough of it.
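For anyone who hasn't used them, here's a quick redis-cli sketch of those structures (key names are made up for illustration):

```
LPUSH queue:jobs "job-1"         # list: push onto the head
SADD online:users "alice"        # set: unique members
HSET user:42 name "alice"        # hash: field/value map under one key
ZADD leaderboard 100 "alice"     # zset: members ordered by score
PFADD visitors:today "alice"     # hyperloglog: approximate distinct count
```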
Postgres essentially gives you one top-level data structure: the table. For each column, however, it has a wide variety of data types and structures. It similarly has a large variety of indexes, making it possible to do performant lookups on all those data types as your use case requires.
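As a tiny illustration (hypothetical schema, but btree and GIN are built-in Postgres index types): one table can mix scalar, array, and document-style columns, each indexed in a way suited to it:

```sql
-- hypothetical table; the index types shown are standard Postgres features
CREATE TABLE events (
  id         bigserial PRIMARY KEY,
  kind       text NOT NULL,
  tags       text[],        -- array column
  payload    jsonb,         -- document-style column
  created_at timestamptz DEFAULT now()
);
CREATE INDEX ON events (created_at);         -- btree for range scans
CREATE INDEX ON events USING gin (tags);     -- GIN for array containment
CREATE INDEX ON events USING gin (payload);  -- GIN for jsonb lookups
```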
Your article showed up at exactly the right time for me :). A project I'm working on has been increasing in 'devops' complexity, and I'd been doing some research into Docker/Ansible/etc. because I felt that perhaps my simpler solutions weren't ideal.
After some research I concluded my approach was still preferable, at least for now, but I still felt some unease because everyone seems to be using Docker and whatnot. Your article really helped me feel more confident in my choices. Thanks!
Glad to hear! Part of the reason I put all of this out there is to offer a different point of view that has worked for me and others I've worked with, and to stimulate debate and fact-based (or at least experience-based) interactions regarding backend lore, instead of pining for best practices that often are under-scrutinized.
Argh, long comment got deleted. So, short version:
1. I already know how nginx/apache/linux work, so it's been pretty easy to spin up a new VPS and configure it to handle potentially multiple apps, rather than learning how to work with Docker or the like.
2. I run pretty much the same stack everywhere (Phoenix/Elixir) and I don't foresee running into issues where I need to run multiple versions of, say, Elixir or Node.
3. For deployment I quite like git + hooks to handle things. And because of some of Elixir's particulars, it's really easy to deploy a new version and recompile without having to completely restart the app.
4. I don't like the idea of adding another layer of complexity. The way I see it, if anyone else needs to work with me on the 'devops' part, they better know how to work with linux/nginx anyways. And if they already know, they can pick things up pretty quickly. Knowing how to use Docker would just add another thing for them to learn, another thing for me to keep an eye on, and another thing to follow updates and security issues on.
5. I'm very much of the article's school of thought that you can scale on a single server for quite a long time, perhaps especially with Phoenix/Elixir. So at worst I could see myself moving an app to a separate VPS, but I don't need to 'orchestrate' things and whatnot.
6. I haven't properly researched this, but I suspect that, for quite a few of our clients, running things on a separate server is a legal requirement. A container would not only be an alien concept to them, but it might very well just not be allowed (would love to hear input on this though).
tl;dr: I just don't need it. I can spin up a server and get everything set up in < 30 mins, partly by hand and partly with some simple bash scripts. And I can't think of a very good reason to learn and implement another layer that sits between my server and the running app(s).
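To make point 3 concrete, the git + hooks setup can be sketched like this (the work-tree path, branch name, and mix commands inside the hook are placeholders, not my actual config); the snippet just creates a bare repo with a `post-receive` hook in a temp directory:

```shell
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/app.git"

# The hook runs on the server after every push; the deploy steps inside
# it are illustrative placeholders.
cat > "$tmp/app.git/hooks/post-receive" <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/srv/app git checkout -f main
# then e.g.: cd /srv/app && mix deps.get && mix compile
EOF
chmod +x "$tmp/app.git/hooks/post-receive"
echo "hook installed at $tmp/app.git/hooks/post-receive"
```

Pushing to that bare repo then checks the code out and rebuilds, with no extra layer in between.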
All that said, I don't know what I don't know, so I'd really love to hear where perhaps I might benefit from using Docker/Ansible or the like! I'm not at all against these tools or anything.
EDIT: I'll add that I do think perhaps Docker might be useful for local development, especially when we start hiring other devs. Am I correct in assuming that's one of the use cases?
> EDIT: I'll add that I do think perhaps Docker might be useful for local development, especially when we start hiring other devs. Am I correct in assuming that's one of the use cases?
As I understand it, something like that is supposed to be one of the big draws: Since everything runs in a container the environment is the same regardless of the system packages, so you shouldn't have any "works on my machine" bugs.
That said, we recently had a session where a system that had just been converted to Docker was handed off from development to maintenance; it doubled as a crash course for anyone on the other development teams interested in Docker. I think only a fraction of us actually got it working during that session; the rest ran into various issues, mostly with Docker itself rather than with the converted system.
Interesting. Thankfully most of my current work involves a stack that works fine on Linux/Mac, and I don't foresee many other devs needing to work with it, least of all devs using Windows.
But I recall doing contract work where it took me, and every new developer, literally a day (at least) to get the Ruby on Rails stack working with the codebase, and where we often ran into issues getting multiple projects running side by side because they needed different versions of gems or Ruby. I can see how Docker would be a great timesaver there.
Are you using ASDF? I generally commit a .tool-versions file to source control for every project. It specifies the versions of each language (usually Erlang/Elixir/Node) to be used for a given project.
I started using ASDF for Elixir projects, but over time I've basically replaced RVM, NVM and all other similar tools with it.
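For reference, a `.tool-versions` file is just one tool per line (the version numbers here are made-up examples):

```
erlang 26.2
elixir 1.16.0-otp-26
nodejs 20.11.0
```

Running `asdf install` in the project directory then picks up those pinned versions.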
Back in the RoR days I guess it was RVM that I used. I've also used NVM in the past.
But honestly in my current work I've not needed it yet. As far as Elixir/Phoenix goes, things have been stable enough so far that I've not needed to run multiple versions, and I try to rely on node-stuff as little as possible (more and more LiveView), so I can get away with either updating everything at once, or leaving things be.
Furthermore, when various apps diverge too much, I usually find there's a need to run them on separate VPSes, which I find cleaner and safer than trying to make them run alongside each other.
But thanks for reminding me about ASDF. It seems like a much nicer solution than using multiple tools!
You have a nice simple solution to a simple problem. Do you just do a ‘recompile’ in the IEx shell?
Docker became easier for me after the whole switchover, having not been involved in Linux land for a few years. Many parts I'd built up muscle memory for years back are different now. When I log in to a recent rev of a Linux distro, I have to spend time looking up how network interfaces are handled, how to create a service, how to look at logs. The Docker CLI has been a more stable “API” for me the past few years. Though lately I'm doing embedded, so I just took a Nerves x86 image and got it running on a VPS. A minute of downtime when updating a rarely used service seems a worthwhile tradeoff.
> You have a nice simple solution to a simple problem. Do you just do a ‘recompile’ in the IEx shell?
Yup. I have a few umbrella apps where I'll do Application.stop() and .start(), and sometimes recompiling isn't enough, but 99% of the time it's all I need to do.
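For anyone following along, that workflow is roughly this in an IEx session started with `iex -S mix` (`:my_app` is a placeholder for the actual OTP app name):

```elixir
iex> recompile()                 # IEx helper: recompiles the current Mix project
iex> Application.stop(:my_app)   # occasionally needed after bigger changes
iex> Application.start(:my_app)
```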
For embedded stuff I suppose Docker could be very useful too. Any reason you don't run Erlang 'bare' and use releases? Apologies if that question makes no sense; I have no experience (yet) in that area but the coming year I'm excited to start working with Nerves and various IoT stuff :).