Absolutely agree with the article. We do boring stacks at Stack Overflow (we are proudly an ASP.NET MVC/SQL Server shop) and our performance stats are pretty damn good.
Our average page is rendered in less than 20ms on cache misses [0] and we do over a billion page views per month with 9 web servers at 5% capacity. [1]
Sorry but it does get annoying when people suggest we "improve" our stack with $fancy_new_stack.
My product is built with C# .NET too. However, I haven't bought into the Microsoft platform at all. Example: Nancy[0] for REST APIs, Postgres for persistence (with Dapper[1], of course), Elasticsearch and so on.
I recently turned off my last Windows Server; I've converted all of the services into Docker (Mono) containers and I'm using Rancher[2] for orchestration.
I am also using Node.js as an API gateway which fits in quite well.
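For anyone curious what the Postgres-with-Dapper combination above looks like in practice, it's essentially raw SQL plus object mapping. A minimal sketch, assuming Npgsql as the driver (the connection string, products table and Product POCO here are made up for illustration):

    using System.Collections.Generic;
    using System.Linq;
    using Dapper;    // adds Query<T>() and friends as extension methods on IDbConnection
    using Npgsql;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class ProductStore
    {
        // Plain SQL in, mapped objects out - no ORM configuration, no change tracking.
        public static List<Product> CheapestProducts(decimal maxPrice)
        {
            using (var conn = new NpgsqlConnection("Host=localhost;Database=shop;Username=app;Password=secret"))
            {
                return conn.Query<Product>(
                    "select id, name from products where price <= @maxPrice order by price",
                    new { maxPrice }).ToList();
            }
        }
    }

Dapper opens the connection if it's closed and maps columns to properties by name, so that's about all the ceremony there is.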
How about your front end Web stack? I've read a lot about the Stack Exchange backend, but not much about the front end. How quickly does development move on the front end? How do you test the front end? How subjectively maintainable does it seem to be? Any pressure or temptation to use one of the hip JS frameworks (if you're not already)?
Not the guy, but their front end looks pleasantly simple and framework-less. They use jQuery. You can do view-source and read the thing pretty much, unlike, say, Facebook. They apparently have written their own websocket system to update the stuff at the top: http://meta.stackexchange.com/questions/168145/sockets-used-...
The only significant licensing cost we have is SQL Server, and yes, we have considered alternatives, like PostgreSQL.
On one hand we would reduce costs by going to Postgres; on the other, there would be a significant cost in retuning all our SQL to keep an acceptable level of performance.
Thanks to you and your team. That's very impressive, especially considering only 5% (max 15%) load on the CPU. Are the systems underutilized in this case? If so, is there a reason?
This is what I like to call "both/and". I like both the old reliable and the new shiny. So often people want things to be "either/or": either I'm going to use this solution or that one.
The key is to know the context of each, and I think this article does a very good job of describing when each ought to be used. He uses the boring reliable stuff on his own things, because he is beholden to no one and does not need to justify his decisions. And he also works with the exciting new stuff with clients, because it is simply easier to sell it to clients who are caught in the Silicon Valley echo chamber.
Both/And is like trying to hold a small bird in your hand: too tight and you'll crush the bird and it will die, too loose and it will fly away.
It's interesting to note that there is an inversion here.
You'd think that for one's own enjoyment, the new and shiny would be the natural choice, while dependable and stable long-term would be reserved for people who need dependability and reliability long term (a.k.a. Clients).
There is no inversion: new and shiny is what clients want, boring and dependable is what owners want. In this example he owns the boring products, and builds the shiny ones.
I think it comes down to how you want to spend your free time.
Some folks like to spend it building new things, and don't want to be hampered by having to spend time learning the ins and outs of $fancy_new_thing. Others like to spend it playing with new things, and don't like to be hampered by any technical limitations of $boring_old_thing.
I often hear this sentiment that new languages and frameworks du jour pop up every week, and that past an age a fellow just wants to learn a single reliable stack and collect a weekly paycheck.
Am I the only one who just hasn't experienced this feeling? There used to be Rails/Django and jQuery. Now there's Node and React. Both revolve around extremely simple ideas. Spend an afternoon reading the React docs, and you'll know everything you need. Take another afternoon after that and learn Redux. If you have another few afternoons, learn Clojure, and see what the Lisp folks have gotten going with reagent/re-frame. There isn't that much to keep up with.
It's the same with complaints about JS having too many transpiled dialects. These dialects make life easier. There are good ideas in them. For example, LiveScript is wonderfully concise and elegant, at no cost to readability once you grok LiveScript. It makes your code more readable and faster to write. How is this a bad deal?
I sometimes wonder if people are just annoyed that they need to learn new paradigms at all. It's not like ours shift at an especially fast rate.
I follow your reasoning for the most part, but there's one core problem that inevitably comes up: everything has quirks, and the quirks add up over time. Boring stacks are partly boring because they've been around for a long time, but they are also boring in the sense that they are predictable.
The problem is that in any suitably large project, you will inevitably cover enough use cases to run into a library or language's quirks. Browser support issue in this one corner, language ambiguity in this other corner, unsupported arcane combination of theoretically orthogonal functions in yet another corner, and then a slow inner loop in some other corner. Once you've added everything together, you get a big headache.
With more mature stacks, it's easier to anticipate these issues because they are well documented all over tutorial materials, knowledge bases, and just professional experience. They've also had time to iterate and address many of the design and polish issues that would slow developers down.
Yep. Most of my experience with these kinds of issues recently has been on the client side: stuff works in one browser or, worse, one version of a browser, and not in another, or it works well in the desktop browser but not the mobile version of the same browser.
These issues tend to be huge time sinks - I've lost count of the number of times I've lost a couple of hours or more to some weird quirk that would never have come up as an issue in plain old .NET.
That being said, in any suitably large project, the quirks in your own codebase will inevitably start to drain resources in the same way whether your stack is boring or otherwise. The advantage with boring is that, 99% of the time, if there is a quirk you know it's probably your quirk and something that's under your control to fix.
I don't know what age we're talking about, but I'm not as young as I once was, and I still love learning new stuff.
BUT
If I'm going to invest time in something, it's going to have to pay rent. It's got to serve some real purpose, to give me something I didn't have before I learned it. And that rent is going to have to cover the initial investment, and then some.
Most new web frameworks don't pay rent.
When I was younger I always wanted to make games. I'd spend weeks building a fancy game engine that would let me make the game... and then I'd burn out, before ever actually building the game itself. Those game engines never paid rent, other than what I learned from implementing them. If you're just learning someone else's API which then becomes irrelevant a few months later, then you don't even get that payback. You just waste your time.
Sure, you can read React docs for an afternoon and get a basic feeling for it.
Then there's the million fine points to master. Like the minutiae of React.propTypes. Or how to use {' '} to avoid having spaces rudely removed. The fine details of react component lifecycle, or how you have to set "className" instead of "class", or ... ...
The rough high-level idea is ALWAYS easy. The devil is ALWAYS in the details.
It is not exactly the same though: &nbsp; is a non-breaking space, which means the browser will keep the elements together on the same line. With a normal space the elements could be rendered on separate lines.
> past an age a fellow just wants to learn a single reliable stack and collect a weekly paycheck
That's a bit too strong. Surely there are people like that. But I'm "past an age" and there's always some newish thing I'm learning. However, I'm in no rush to start adopting them for real projects. I want to know the payback from switching, and neither hype nor the word "modern" cut it. I want to know what this language is good for and what it's not so good for. So I learn things and evaluate them, and over time I add things to my working set of languages and tools.
It is easy to learn new languages, but as the previous commenter mentioned, they have their quirks, and when you add in the complexity of working with a group of developers with different beliefs and experience levels, it adds up. After a point, you are just dealing with the quirks of the framework/language rather than the business problem.
I think the improvements new stacks bring are getting more and more marginal compared to the effort required to learn them, and instead of learning those new frameworks, I would rather spend time learning about new fields in programming; for example, the things I am interested in are, in no particular order: neural networks, compilers, 3D graphics, high performance computing, and maybe crypto and OS development.
>I often hear this sentiment that new languages and frameworks du jour pop up every week, and that past an age a fellow just wants to learn a single reliable stack and collect a weekly paycheck.
TFA clearly and repeatedly states the author learns new tools and frequently employs them for client work. The point of the article is that boring/mature stacks can offer a better ROI for in-house products.
Switching frameworks for any large system is non-trivial not so much for the technical effort necessary but for the political effort.
For any large enough system (if it were small, you could make changes and nobody would care), you're going to have to persuade the stakeholders why they need to switch. That's not something you can do with a low-touch message to everyone in the project. That's a "let's sit down with people, listen to them, figure out how we can find common ground" high-touch strategy.
Even if you get everyone on board, it still takes a long time.
Well, I don't know if "boring" is the word I'd use, but I like _transparent_ stacks: if I make a mistake, it tells me where I made a mistake in a way that I can interpret. All of the "boring" stacks have this property.
I think this is the biggest selling point for Functional Programming. You know given these inputs, you will always get the same output.
Even if not using a Functional Programming language, it is good practice to write your code to follow this principle whenever possible. I think this is what mocks and unit tests try to achieve, creating a scenario where given certain inputs, you know what the output will be.
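To make that concrete, here's a tiny C# sketch, assuming xUnit for the test (the VAT calculation is just a made-up example):

    using Xunit;

    // A pure function: the output depends only on the inputs - no hidden state, no I/O, no clock.
    public static class Pricing
    {
        public static decimal WithVat(decimal net, decimal rate) => net * (1 + rate);
    }

    public class PricingTests
    {
        [Fact]
        public void Same_input_always_gives_same_output()
        {
            Assert.Equal(120m, Pricing.WithVat(100m, 0.20m));
        }
    }

Because WithVat touches no shared state, that assertion can never become flaky; that's the property mocks are trying to recover when the code under test isn't pure.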
But I think there is a point worth noting that nikanj has alluded to.
I think the FP languages of increasing popularity (not necessarily simply because they happen to be FP) care about developer interaction with the tool. FP is supposed to be a better tool (not because I want to flame, but from the perspective of its developers), so why are we bragging about all this type info and not using it to convey more relevant info to the developer, with less cryptic and much more precise error messaging?
I listen to programmer interviews a lot, and this obviously comes up with Rust a lot, and with the original dev of Elm to a very significant degree. He is most proud of that.
I agree: the languages that are winning hearts and minds are those requiring less and less mind to use, thus growing heart. I mean if this crowd will allow very cheesy imagery.
Don't confuse correlation (some functional programming languages out there don't have good error messages) with causation (functional programming languages don't excel at good error messages).
I might be the odd one out here, but I think GHC has great error messages; they're just written in an academic style that takes more mental effort to parse and understand.
Empirically, that's probably true, but probably only because functional programming is usually more popular with fiddlers and academics than with get-shit-done developers, so there's simply less time spent on improving developer experience. I can't think of any inherent reason why functional languages would have worse error messages. Elm, for instance, prides itself on its (compiler) error messages.
1. Calling a function with the same input sometimes gives you a different output.
2. Calling a function with the same input will always give you the same output.
Which do you think will more reliably give you correct outputs?
> I care about correct output
That's why we unit test. Well, and to protect the requirements of the system from the engineers who change the codebase. Referential Transparency (the concept described above) is a boon to this.
I'm having flashbacks about Angular 1.x error stack traces, and their overall complete uselessness, as I write this. Imprecise, inaccurate, or downright misleading errors are the bane of my life. Please: tell me what went wrong and exactly where it went wrong.
(As an aside, this is why I'm no lover of the Arrange-Act-Assert test pattern, or of BDD using specflow: both great for telling you that something is wrong, not so clever at giving you a clue what that might be or where it happened. Gurning time vampires the pair of them.)
It's a great point--it just seems arbitrary to draw the line in the sand of having a "boring stack" that consists of SQL Server and C#.
I mean, that, as a minimum, means you're running and maintaining a Windows server, along with antivirus, backups for the OS, and backups for the database. Any time Windows updates come along, your service will be unavailable. Every couple of years, you'll need to buy a new license, upgrade Windows, and make sure everything comes up smoothly. This likely will involve fixing and troubleshooting some issues. You'll need to have some monitoring set up to make sure you don't end up running out of disk space on Saturday morning.
Or maybe you have more than one Windows server (to avoid downtime from a single server), but that makes your stack decidedly less "boring." Now you're dealing with at least a load balancer and making SQL Server redundant.
I mean, I guess it's more boring than chasing the latest front-end reinvention or maintaining MongoDB, but there's some room for improvement here, clearly. Containerized deployments (via Docker or a "serverless" architecture) might be able to help.
Don't get me started on chasing down NuGet dependency errors. The only difference is there's usually only one option for any given task. We're constantly held back by packages with conflicting references. I'm not jumping everything to Angular 2 just because it's available, but I can build my Angular 1 apps in a fraction of the time it takes to use my old MVC boilerplate.
Seriously, good package management makes a huge difference. Long-term maintainability really benefits from a package manager that doesn't make you spend hours diagnosing obscure dependency errors. I do not miss this aspect of developing in C# at all, even if it was great otherwise.
>along with antivirus
I've never heard of anyone seriously running antivirus on servers _except_ in the type of environments where it's mandated by terrible policies. Policies that probably aren't going to excuse lack of antivirus on Linux.
So yeah, if you're dealing with the type of person that runs antivirus, then everything else will probably suck.
I'm also confused what Windows has to do with an HA DB setup. Or do you mean compared to a cloud service, which has even more complexity, just hidden?
> So yeah, if you're dealing with the type of person that runs antivirus, then everything else will probably suck.
/shrug. I don't use Windows, but my understanding is it's fairly common including in server environments. I can't imagine taking money from people and storing their data on a Windows server without it having some kind of malware protection.
> I'm also confused what Windows has to do with an HA DB setup. Or do you mean compared to a cloud service, which has even more complexity, just hidden?
Well part of it is that SQL Server makes it tougher for a "boring stack for my personal projects" because it's pretty expensive if you want that. It just adds on to the "not boring" aspect of it. Admittedly my understanding is that there's nothing "boring" about a redundant relational database on Linux, either.
There are cloud services that help with this (some which manage that complexity, and others which abstract it away from you), but both of those options are probably more "boring" than maintaining your own SQL Server instance.
>I don't use Windows, but my understanding is it's fairly common including in server environments.
If you don't use Windows then it's rather odd to opine about something you have no experience in. I guess we have a different view on the words "my understanding".
> I can't imagine taking money from people and storing their data on a Windows server without it having some kind of malware protection.
Okay, perhaps you can't imagine it, but again I don't quite see your point considering you don't even use Windows. I've been using Windows without an antivirus for over a decade without a problem. Maybe it's recommended for non-technical users, but then again, as a server admin you hopefully know what you're doing.
>Well part of it is that SQL Server makes it tougher for a "boring stack for my personal projects" because it's pretty expensive if you want that.
SQL Server is free for developers. Also, IIRC SQL Express (also free) can be used for commercial apps.
Although my experience is with the AWS equivalent, me too--absolutely. Good luck keeping your comment visible here by suggesting "serverless" in a thread called "Happiness is a Boring Stack."
How much more boring could it be - a stack that is not a stack. Frankly, the only (and small, mind you) concern that I have with "serverless" is that it might make software development boring.
> I mean, that, as a minimum, means you're running and maintaining a Windows server, along with antivirus, backups for the OS, and backups for the database.
By contrast, if you're on the LAMP stack then you won't need a Linux box, and since you don't have a server you definitely won't need to secure it or back it up. It'll all just run on clouds or something, I guess.
No, the point is relative to being containerized, where you're deciding ahead of time what needs to be backed up (outside the container) and what doesn't. This leads to backups and restores being easier and more "boring."
It's also counter to one of the themes of the post, which suggests that by choosing this boring stack, he can just leave it alone for months at a time without touching it (presumably relative to other stacks). You can do that--but it's not like you're not still on duty for making sure these things are still working and happening.
EDIT: For clarity:
You can set up a multi-AZ MySQL database instance on AWS, with whatever backup schedule you want, using a single API command. That is decidedly doing a lot of things under the hood, but I don't care--it gets done, and the backups get made to S3, and I don't have to handle it myself. It even (optionally) does minor updates for me and swaps masters automatically while doing them. To me, this is a great example of "boring."
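If memory serves, the same call through the AWS SDK for .NET looks roughly like this; treat it as a sketch rather than production config, since every identifier, size and credential below is a placeholder:

    using System.Threading.Tasks;
    using Amazon.RDS;
    using Amazon.RDS.Model;

    public static class Provisioning
    {
        // The "one call" multi-AZ MySQL setup described above; all values are placeholders.
        public static async Task CreateDatabaseAsync()
        {
            using (var rds = new AmazonRDSClient())   // region/credentials come from the default chain
            {
                await rds.CreateDBInstanceAsync(new CreateDBInstanceRequest
                {
                    DBInstanceIdentifier = "my-app-db",
                    Engine = "mysql",
                    DBInstanceClass = "db.t2.small",
                    AllocatedStorage = 20,              // GB
                    MultiAZ = true,                     // standby replica + automatic failover
                    BackupRetentionPeriod = 7,          // automated daily backups, kept 7 days
                    MasterUsername = "admin",
                    MasterUserPassword = "change-me"
                });
            }
        }
    }

The CLI equivalent is a single "aws rds create-db-instance" invocation with the same parameters.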
Now compare that to setting up and maintaining something similar with SQL Server. It doesn't mean I don't have to secure or backup my MySQL database instance--it's just a whole lot more "boring" in the good way.
At least in the case of the database, I don't think I'd ever be willing to accept outside-the-container backups as a sufficient strategy. Not if I'm in a position to be held responsible for data loss, anyway. But that's me and my baggage.
That said, if you've got other reasons for wanting containerization, MS SQL Server can be containerized nowadays. Or you can choose a slightly different stack if you've got other reasons for wanting to not use MSSQL (cough cough price cough). Or you can shove it all into AWS or Azure or whatever and get on with your life, same as Linux. It's all good. Maybe the article's author is doing none of that, and that's fine too. Boring and old is a subjective thing, and has a lot to do with what you already know and have down cold. On the other side of that coin, it takes only a little bit of unfamiliarity to create and/or perceive something to be an unmanageable mess.
Yep. Sorry--my original point is getting derailed. I was responding to someone suggesting that it's ridiculous to take into account things like backups and security because those things are necessary on any platform.
The comparison was to illustrate that although those concepts exist on any platform, comparing the maintenance / backup / security of a database server you maintain yourself versus a managed instance shows significant differences.
omg this guy is spot on. My day job is Magento (hell), but I am absolutely in love with boring ol' C#, along with the new ASP.NET Core framework & Kestrel web server. All of my side projects are built using C#, and I don't use jQuery or Angular or React or Reactive. I use plain old vanilla JavaScript, because I care about zero HTTP requests & zero bloat & zero BS.
They also tend to turn into huge messy hairballs after years of developers and new business use cases. Not that these are use cases for personal projects, but standardized frameworks are valuable for large teams with lots of changing use cases and long-term maintenance requirements.
I don't see any long term stability and support in any JS framework. They seem to pop in and out of existence at a fairly rapid pace. What examples do you have of long-term projects that use those frameworks?
http://backbonejs.org/ is my favorite framework. It doesn't do much, yet I continue to be impressed with it daily. I'm on my second major project with it. It's getting old now in JS years (read: actual years). It doesn't have any steam behind it compared to Ember, React, Angular, but it continues to work daily.
One of the first React workshops I was a part of was put on by Henrik Joreteg. In the first five minutes he said something along the lines of:
"React has its place, and I mean Backbone still works and is a fucking amazing library."
Then he went on a mini rant about how people abandon libraries just because something newer and shinier comes along, not because the libraries are bad or deficient in any way - which is why he used his Backbone example. It's still awesome, it still works incredibly well, and it is still very much supported by a large, well-informed community.
Then of course, he went back to telling us how awesome ReactJS was and why we should want to try it and use it for our projects.
What boilerplate? React is all the rage now and it looks a hell of a lot like a backbone view. It has an initialize function, render, events, some constants, and the handlers for events. I don't know how much more minimal you can get than that.
As for the unmaintainable part, I couldn't disagree more. Maybe you've worked with poor programmers?
Really it is the requirement to redefine your model in Backbone that is a deal breaker. A lot of code that you just don't have to write in other frameworks, although I understand why it is there.
Most of the unmaintainability comes from it being a lighter-weight framework, so devs often fall back to jQuery. This is OK if you can figure out exactly where they added that trigger. Way too often that is overly hard. It could be pages away.
And yes, most of my coworkers are poor programmers. Sadly that is the world we live in, and I like frameworks that make them a little easier to live with.
The issue I've seen with backbone apps, even written by "good programmers", is that no two are alike. So, when you come into a new app, a nontrivial amount of time is spent figuring out how the original programmers are using it, which libraries they pulled in for stuff like model validations, etc.
To be fair, that's somewhat due to the fact that I mostly encounter them in consulting arrangements, and I don't know that any of the other frameworks are much better in this respect. I think I've just gotten spoiled by doing most of my work in Rails, a mature opinionated framework.
Anything open sourced by Facebook, Google, Microsoft, etc. and used in their own applications. Those will probably see ongoing development for a while.
Well, Angular 1 still seems to work pretty well, at least for now. It's not a requirement to upgrade to Angular 2 yet, and it's not like Angular 1 apps will suddenly break and lose all their functionality, at least not without intentional decisions made by browsers to not support it.
I've seen plenty of Ember apps succumb to the same syndrome- the problem is the developers, who are either incompetent or contractors who care only about the deadline before them. I don't think a framework is a magic cure for bad developers, though maybe because you wrote less code to start it probably does take longer to get to spaghetti hell.
Sure, but now the use of rich JavaScript clients enables a bunch of use cases and workflows and interactions that were previously not possible. It has enriched the user experience, sometimes unnecessarily, but that's down to the implementers, not the technology.
This depends upon the business case. Some just do not permit reloading the whole page for every server interaction. Very interactive multiplayer games come to mind.
I'm with you for the most part, but working out browser inconsistencies without jQuery's (or similar) help is painful. Admittedly, not as much of a problem recently.
Wait, C# (especially with Core and Kestrel, which are brand new) is a boring old stack to you, and PHP/Magento is a cool new stack?!? PHP is as old and boring as it gets. I worked on PHP 18 years ago; C# is far younger than that, and Core and Kestrel are as new and shiny as almost any JS stack.
Well, you are working with Magento, so pretty much any technology would be better. Magento is seen as the "safe" choice by many when picking an ecommerce solution. I assume the reason is "everyone else is using it", and while that's true, I don't believe many on the technical side are actually enjoying it.
Magento is non-boring in the same way a plane crash is.
I agree in general but why would you use Windows/SQLSrv for internet-based products? I'm fine with it on the desktop, but the extra costs and time devoted to things like activation are a non-starter as far as I'm concerned. I believe they finally have ssh and package mgrs available, but don't you still have to install them manually?
There's plenty of boring stacks on Unix that are free, in both senses of the word.
Because it performs well, and the cost of Windows licenses ends up being a rounding error if you have paying users. I've administered Linux boxes for years, and while I enjoy it I find Windows Server more enjoyable to work with in many cases.
SQL Server licensing is another story. It is expensive, but I've also seen it handle heavy workloads on modest hardware without complaining. Though I don't doubt the same is true for Postgres and MySQL/MariaDB/Percona. From what I've seen, the constellation of reporting and BI tools around SQL Server are what make it appealing to enterprises regardless of cost.
Windows being a rounding error is kind of a myth perpetuated by people that haven't been in the position to buy Windows licenses for a company.
First of all most software developers are not working in the consumer space. And for B2B you often get asked to have an on-premises option, either because the company doesn't trust you with their data, or because non-stop Internet connectivity can be a problem. And when deploying on-premises, the cost of the Windows licenses do matter a lot.
If the clients are just web browsers then you can save the company a lot of money, since the clients can be just terminals powered by whatever you can get your hands on. But if you assume Windows clients, well, the costs can be huge.
And on the server side, remote maintenance is way cheaper with Linux boxes, because well, you can control anything on a remote Linux machine, securely, reliably and for free. Add to that the licensing cost of Windows, which isn't cheap because you can't run a server on Home edition.
Basically if your solution assumes Windows in any way on the customer's side, you're just adding unwanted cost that could have been your profit. And SQL Server is simply unjustifiable.
> And when deploying on-premises, the cost of the Windows licenses do matter a lot.
Sort of. My understanding is with Datacenter, you license the virtual host (per-processor cost) and can run as many VMs on it as you want. So the incremental license cost is zero, if you already license that way.
Other editions have some limits, so "it depends". But it's not just a "your solution uses Windows, so your customer must buy a license".
The other factor is support cost. If you're selling to an organization that can handle Linux, that is probably cheaper. If you're selling to an organization that only has Windows expertise, it's probably going to suck a lot of your support time to get them running on Linux -- and even though you may be able to do remote support more efficiently than you can with Windows, you're still going to be doing more of it.
Well, my experience is that we've always had to provide support. I had to deal with customers running Windows, but those customers that lacked a capable IT guy didn't have the know-how to actually maintain a Windows server. And when that server crashes due to poor maintenance, guess who gets called.
Basically as a provider, you can never trust the customer's capability for maintaining the server that your solution runs on. It's much better to decrease your costs for delivering the required support.
Otherwise you're going to find yourself guiding somebody on what buttons to click in the Control Panel, for getting the damn network to work, on the phone, on a Saturday.
This has always been a good reason to stick with MSSQL. In the future, I think dropping a Postgres Docker image on their Windows server may become the norm though.
But that's not specific to Windows servers. Have fun guiding the non-IT guy at the other end of the line to install that important OpenSSL update on a Linux terminal.
I apologize; I should have explained what I meant more clearly. Like the author of the original post, I'm writing from the perspective of running a relatively small SaaS app.
I've had to use both Windows and Linux VMs on Azure, AWS, and elsewhere for applications like that and I haven't found the cost of Windows boxes to be prohibitive.
In other circumstances, such as the ones you mentioned, you're definitely correct. I was only trying to explain why one might choose Windows, not why everyone should. :)
You've got nothing to apologize for, you've shared your experience, I've shared mine. And don't get me wrong either, I'm all for "use what you know". Plus speaking of Microsoft's dev stack, they seem to care less and less about Windows and .NET Core is very attractive imho.
In my experience dealing with boring corporations, that's another myth. Yes, they run Exchange. But all new developments I've seen happen on Linux boxes, precisely because it's easier to outsource IT.
Perhaps it depends on how boring/conservative the corporation is. At a past job where we worked with big banks, they'd sometimes look at you suspiciously if your application didn't run on Windows Server.
That's just anecdotal, though. Maybe we were just unlucky and worked with odd customers.
>And when deploying on-premises, the cost of the Windows licenses do matter a lot.
I'm not sure why you'd pay for licenses if the company is forcing you to have your product deployed on-premises. You would be an individual owner among the rest of the licenses which are owned by the company. That just sounds really dumb to do for both sides of the deal. Is this normal?
I don't think he was going to purchase the license for the customer. The point is that the license cost contributes to the quote, no matter who purchases it in the end.
As for activation: if you're dealing with more than a couple of servers, you've almost certainly got volume licensing and a multiple activation key, which means that your install scripts will automatically handle it for you. Likewise, installing features like ssh and packaging can be automated as well. A lot of the things that people criticize Windows for haven't been true for a long time, especially in the enterprise/service area -- there are a lot of tools out there for making things just as easy, if not easier, than *nix.
Activation? I do development in both stacks, and there are pros and cons to each. For Windows, the biggest con is really cost. If you're comfortable with both stacks the difference in setup/maintenance isn't very different.
However, the author makes a good argument about boring stacks in general. Your boring Linux stack will be much easier to maintain than something on bleeding edge.
Not that chasing after the latest and greatest is always a bad thing, but sometimes existing (and relatively boring) tools are the best ones for the job at hand.
>Ruby isn't cool any more. Yeah, you heard me. It's not cool to write Ruby code any more. All the cool people moved on to slinging Scala and Node.js years ago. Our project isn't cool, it's just a bunch of boring old Ruby code. Personally, I'm thrilled that Ruby is now mature enough that the community no longer needs to bother with the pretense of being the coolest kid on the block.
This is my feeling right now. Nobody is yelling about Ruby so I can take my time and enjoy learning it at my own pace.
This is how I feel. I have friends that are always going on about all the latest "best practices" and cool hipster stuff like docker swarms and microservices. Meanwhile I've kept my own projects using boring old (or rather tried and tested) technologies and languages like python.
I don't like it when people are recommending new buzz words every week.
Unfortunately the new buzz words of today are the boring stacks of tomorrow.
I would say that containers represent a greater leap forward in thinking than a new programming language. Like it or not, your future applications will run in a Linux container.
> Unfortunately the new buzz words of today are the boring stacks of tomorrow.
Bullshit.
SOME of today's buzzwords will be the boring stacks of tomorrow. But unless you're some sort of oracle, chances are you won't be able to pick which ones they are today, for the same reasons you shouldn't try to beat the stock market.
I don't think it's that hard to pick promising tech to explore once you have some experience under your belt. For instance, containers are super interesting to me because they directly address a lot of the pain points with dev VMs or native development. It may end up being the case that Docker does not win this battle, but it's also clear there is some meat to this hype and I will profit from learning about it even if the landscape shifts dramatically before it stabilizes.
To me you're proving his point. Containers are the new hip thing right now, but it's a huge amount of work to productionalize them. There are so many solutions and all of them have drawbacks. Seeing how things are going, most likely it won't be Docker in the end, so the amount of effort you spent on it won't give you any edge.
From my observation it seems like Docker is ultimately ending up being used as just another packaging mechanism.
I don't see how you take it that way. Experience with containers will be very valuable over the next decade, and anyone with experience will have an advantage over someone with none. There's not one binary moment where all of a sudden technology X is mature and stable. Docker has been around over 3 years, and the underlying technology many years before that. Although Docker is having trouble on the orchestration side, the container format is quite ubiquitous already, so I'd say it's highly unlikely that Docker will completely disappear in the medium term. Finally, it's not all or nothing; Docker solves a lot of problems today in certain configurations, and you don't even necessarily have to productionize anything, as it's quite useful in purely dev scenarios. Even in a worst case scenario where Docker becomes completely irrelevant, you've learned something about what works and doesn't work with containers.
There's obviously a choice where you want to pick things up on the adoption curve, so maybe you're more conservative than me, but my underlying point is that it is possible to find a balance and you can still advance your skillset even if you pick something that doesn't necessarily stick around for the long haul.
> Unfortunately the new buzz words of today are the boring stacks of tomorrow
Not at all. Contrary to many previous innovations (e.g. virtualization), the current wave of hipster tools like Docker and the HashiCorp stuff are often viewed with suspicion by engineers in large tech companies.
One of the reasons is that this stuff is heavily hyped up. In hipster crowds, adoption is driven more by "awesomeness factor" than by a technical, rational discourse.
It is still worth it to delay adoption by a year or two. By then, early adopters have already stepped on most of the mines, flaky fads have crashed and burned, and the real winners of the next generation of stack wars will have coalesced around something resembling sustainable best practices.
What you said was standard practice for savvy buyers of Windows for around a decade. We always told people, "Just wait till at least SP1 to buy it. You won't regret being a late adopter." We never did regret it.
I agree. Letting things shake out is an important part of not losing your hair. At the very least you come into an ecosystem with mature third-party libraries.
And documentation, or at least answers on Stackexchange. Documentation often lags behind on these new projects, especially when they're in a state of flux and you can easily find yourself pulling your hair out because the docs are for two versions ago and wrong now.
I strongly disagree. There's so much flux in the new stuff that it seldom reaches the level of stability to become boring before its critical mass (if it ever had one) surfs on to the next cooler thing.
Containers are quite orthogonal to stacks. You can do LAMP, Rails, Django, Struts, MEAN on containers. It disrupts some parts of the development environment but it's back to boring pretty soon.
However if you start running your containers on AWS Lambda or the like then your stack is definitely shaken, if you can port it to there and you don't have to start anew.
• If you mean "familiar" then you're right: something you know is better than something you don't. (But it's not the way to learn or expand your horizons.)
• If you mean "stable" then the whole thesis is a bit of a tautology—crucially, there's no fundamental contradiction between "new and shiny" and "stable", and "boring" stuff is often also unstable/insecure/bug-prone (think PHP or C).
• If you mean "popular" or "mainstream" then you're putting way too much stock in the judgement of crowds. Crowds and fashion are fickle and the popularity of a product is a poor indicator of any intrinsic qualities.
I've seen people using all three of these definitions—and linear combinations thereof—when talking about these things. More too, probably.
I think it's important to separate these out and be exact about what you mean by words like "boring" or "practical" when talking about software tools and abstractions, both to understand exactly what's going on and to communicate clearly.
That's why I chose Elixir for our product, and am so glad I did; it may be shiny and new, but it's dead simple.
The "boring" familiar choice would have been Ruby / Node, etc.
I think the problem is when people jump on shiny new bandwagons just because of the shiny factor. When instead they should ask: "Does this shiny new technology radically simplify something that is currently complex and is at the core of my application?" (again, going with the above talk's definition of "simple")
Maybe "stable" was the wrong word. Perhaps "proven stable" would be better. It's not that "new and shiny" can't be stable, it's just that it hasn't proven itself stable yet. It hasn't been around long enough to prove it. So it's more using technology that has over a decade of proven dependability, over something that may or may not be.
Elixir, Clojure and Scala cut across this distinction. They're very new and shiny but built on proven technology. In that sense maybe they're a better bet as they definitely bring something new to the table and help solve hard problems. I think Clojure suffers most from this familiarity bias as I'd argue it's at the top of the excellence tree despite being largely ignored by an industry forever wedded to C# and Java.
I've flipped-flopped back and forth several times over the last 15 years on the question of server-side vs. client-side view logic. I literally start each new project with a new technical review and evaluation of this question.
I'd have flipped permanently to client-side if the quality and consistency of the JS stacks were of the same quality as the server-side stacks (I use ASP.NET MVC). I know that it has been a rapidly evolving thing and hence the churn. But here's the thing - with software I've come to prefer intelligent design over Darwinian evolution. On the server-side we have intelligent design; on the client we have evolution. On the server, we have stability and a roadmap. On the client we don't.
With that said, painting pages on the server to send down to the client just doesn't smell right anymore.
I've settled on a halfway point being my ideal. The server will render a "page" but knockout will take over and render the interactive parts before it's visible to the user. Applications then become a collection of tiny single page apps.
By that I mean that we should have moved cleanly beyond that by now with no regrets.
Certainly it comes from my time in the '80s and '90s developing "desktop" software (of course we didn't call it that) on Unix and Windows. There were user interface objects that had behaviour. Once we "got" that HTML and JavaScript could do similar, we made the browser into the new windowing system. But unlike the Windows and Unix desktops of old, which had "official" SDKs and libraries, it was the wild west. Fifteen years later it still is.
If fifteen years ago you had said that one starts a new project by running a command-line script that would clutter your project with hundreds or thousands of tiny source files from sources unknown, we would have laughed at the absurdity.
I personally enjoy the new & fancy. In fact, I try the new stuff because I enjoy trying it out, and the "gotcha" moment with that happens when you understand the stuff.
That said, if I had to choose a tech stack for work, I'd choose something that I am comfortable with and something that I know a lot about (compared to the rest of the stuff available) - because in the end, it's not about having something new and shiny that I can blog about, but about solving the problem that my company is facing/trying to solve.
There's an important difference: the new and shiny may be something that increases your productivity. The increase may be so big that it outweighs the risks of it not being mature or well-known enough.
See Paul Graham's essay on Lisp as a secret weapon. There were a number of now-popular "secret weapons", most way less elegant than Lisp (e.g. Rails), used exactly because they made your product's time to market much shorter.
Getting to the market quickly, while an opportunity still exists, is one of the most important problems any company is going to solve. Operational excellence may take a back seat compared to it.
My counterargument is when rails came out PHP was just as capable of letting you knock stuff up quickly and meeting the business goals for a startup.
Where Rails excels, I think, is at shops that need to pump out different websites for different clients, so that the cost of continually starting over is minimised.
Rails, like all opinionated frameworks, gives you a great head start for a new project, as long as you follow the prescribed project structure. (It starts to feel worse as you grow and hit the restrictions of the baked-in assumptions.)
Lisp is not "old tech"; Lisp is timeless tech. Like, well, math. BTW, Unix is also rather old, but still does remarkably well, and gave a distinct edge to its users since ~1990s when Linux and FreeBSD became viable server platforms.
I totally agree! That's why I check out the new stuff in the first place. It's not just "keeping up with hype" as most people say. Generally, new tech comes up with its own way of doing stuff (which may be exactly what you need to solve your problem).
It's not just comfort, it's that stuff that's been around the block a few times tends to fall into one of two categories: Stuff that's solid and stable and not very fussy, and stuff that nobody uses anymore because it's a PITA. That makes it pretty easy to make good technology decisions.
Newer technology tends, almost by definition, to be very experimental. And experimental means, almost by definition, that some ideas are going to work out and others aren't. Oftentimes it takes a while to figure out which is which. And popularity, even extreme popularity, tends not to be a good proxy for robustness - if anything, it's a source of noise that only makes it harder to figure out what's robust because hype tends to be a pretty darned autoregressive process.
Sure, but longevity alone is an insufficient criterion. There are a few venerable, perfectly adequate, stacks (for lack of a better term) that I would rather not start new projects with.
I say this as someone who once had to argue forcefully to start a new project in .Net while it was still in beta.
I agree. Kind of. You get to choose the kind of problems you work on, and the way you break up your time. If you want to innovate on the infrastructure, because you find that interesting or because it's worth the gains for the kind of scenarios you work on, then go for it. At the same time, focusing on solving problems in the programming infrastructure and spinning cycles learning new techniques means you get less time actually solving problems in your actual problem domain. You don't need a $3000 carbon fiber road bike to get to your friend's house down the street, no matter how many articles you read about how good that road bike is. I suppose the idea is to use the most boring stack that can get the job done.
I'm in between jobs and thinking of starting a SaaS on the side in order to support my goal of going backpacking for a few months next year.
However, the number of microservices I'm thinking of putting it all together with means it'll require a decent amount of non-automatable (as far as I can tell) maintenance and attention. This isn't exactly what I want at the back of my mind all the time while travelling... but then again, you can't have your cake and eat it too.
Would anyone who has a successful hands-off SaaS built on a modern stack care to chip in on ways of mitigating this?
Write it as a monolith. If you get to a stage where it needs a bigger server than you can afford, run a profiler on it, and fix up any glaring performance problems. Then if it gets wildly successful and does actually need microservices, then you have the resources to pay for the development & sysadmin overhead.
Also by delaying as long as possible, you'd have the maximum information and maximise your chances of getting the split right.
Elixir is a solid solution for this. You essentially write micro services as a monolith that runs on the BEAM VM and naturally spreads across a cluster (https://elixirschool.com/lessons/advanced/umbrella-projects/). It's all message passing without shared memory so whether you're sending something to a different function, process, processor or machine in a different datacenter your code doesn't really care. As a bonus you don't have to define REST/SOAP/Protocol Buffer interfaces for every single microservice you're writing...since it's just function calls.
Everything works smoothly and efficiently just like boring Erlang has for decades, with some readability and productivity perks that Elixir gives you.
Elixir sounds trendy and new, but it's just modernized Erlang. Erlang is older than Linux.
Yes, I loved Elixir as soon as I set eyes on it. However, as far as "easy to reason about" goes, Clojure in an Emacs CIDER repl is about as good as it gets.
How about don't go with a microservice arch. Nothing wrong with tried and true monolith Django, Rails, or [your favorite stack].
In fact, sounds like this is your first time out, in which case I would absolutely recommend minimizing the headache and complexity and go with a simple monolithic stack. [insert rant about premature optimization]
In any case, build your SaaS, go backpacking. Both invaluable experiences.
Expanding on the "both/and" advice from elsewhere in this thread, I'd recommend saving for the backpacking trip first with a boring-yet-lucrative consulting gig, then building the SaaS product during your downtime on the road.
It'll fix your expectations and your finances all in one go. SaaS businesses take at least a year or so to ramp up to the point where they can reliably support you (even on the beach in Cambodia). It'd be a shame to have to cut your trip short because the thing you'd built wasn't immediately successful.
With luck, it'll be the thing that finances your second lap around the world three years from now.
[edit] And yeah, don't build microservices. Especially for a B2B SaaS. Scaling problems for products with a price tag are things that come with tons of money attached. Use that to scale when and if it becomes a "problem".
For now, build a boring Rails app on PostgreSQL or whatever has the least chance of falling over while you're halfway down the Inca Trail.
Organize your microservices as packages, then run it as a monolith. Host it on Heroku. Now you have no ops, and no automation needed. Once your business grows to the point where you need to scale, you can easily split apart your monolith and you can host it on AWS (Heroku makes it easy to migrate your database to AWS).
There aren't many SaaS services I can think of that can be truly hands-off to the extent that you can backpack with a free mind. Also scaling a SaaS business takes time and significant effort.
All of the above is from personal experience.
If I were looking for passive income from software I would've looked at desktop software.
Nice takeaway: "The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood."
"It seems like on a lot of stacks, keeping the server alive, patched and serving webpages is a part-time job in itself. In my world, that's Windows Update's job."
I imagine the Linux alternative of this would be Debian with unattended-upgrades and a PHP app.
Sounds about right. I run Tiny Tiny RSS with Postgres and Nginx on Debian Stable and I can't remember the last time I've had to spend time administering it. (I do update TTRSS itself manually, but since it's behind Nginx's basic auth, I don't feel rushed to plug security holes).
The truth is, as shown by data, that downtime is usually caused by developers, not by stuff crashing on its own.
PHP/Java/Perl. Python used to be on the list before everybody in the neighborhood wrote their own framework and/or different flavors of libraries that solve the same problem :p.
"Good old C#, SQL Server and a proper boring stack and tool set that I know won't just up and fall over on a Saturday morning and leave me debugging NPM dependencies all weekend instead of bouldering in the forest with the kids."
Microsoft tax.
Yes, you can get alternative C# implementations [0], but SQL Server? Having to deal with "MS Tools" in an era of high-quality alternatives? Boring for me would be Perl/Postgres, both in their 20s, solid, reliable, with plenty of quality developers and language support.
At my last job the backend system was mainly written in Perl, finding developers was nearly impossible. Of course, it may have to do with the fact they paid less than other companies and did not allow remote work.
This is my feeling about Rails. I've created several Rails apps that delighted the customer, and most have required literally zero maintenance after years of use.
Sprinkle in a bit of jQuery and you have an app that is stronger and just as fancy-feeling as a mess of a massive client-side app.
I think the improvements to turbolinks in Rails 5 make pjax more of an option now. Also, for scaling remember it's not the end of the road for Rails when you're running out of processes to keep up with requests. JRuby 9000 has been gathering a lot of momentum recently and promises huge performance gains when Truffle and Graal kick in. Migrating a Rails app to JRuby is not that difficult.
I really like C# and .NET as a stack, just like the OP. It may not be cool but it works well and they are good skills to have for stable employment prospects.
The new .NET Core stuff is looking very promising too. I really hope it takes off, and not just because I have a book out on it :).
On top of that, it has a powerful, well-documented IDE and support community. It is what it is. I love to experiment with new stacks and paradigms, but if I want to Get Shit Done, nine times out of 10 I'm going with older tools.
If you are doing a side project, you can choose between solving a real problem or learning new tech. It is hard to do both simultaneously in the time that most people give to side projects. Hell, it's hard even if you are full time.
Learning new tech in a side project is a worthy thing to do and I have built some nice little toy Haskell, Elm, JS and Java projects.
However, I am more productive in my usual C#, so if I wanted to get something done quickly, e.g. an MVP website, that would probably be the best choice despite the advantages of other languages (e.g. PHP -> cheap shared hosting, easy to deploy; Haskell -> excellent type system, finds most bugs at compile time; etc.).
One exception: I recently wanted to scrape the HN API, walking items sequentially but with multiple requests in flight at a time, and I found this easier in Node.js than in C#, because I could be sure I wouldn't get exceptions saying my threadpool has run out of threads etc. :-)
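For what it's worth, here's roughly what the concurrent version looks like in C# with async/await and HttpClient - awaited requests don't pin threadpool threads, so the thread-exhaustion issue shouldn't come up. A sketch only; the item range and batch size are arbitrary:

    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class HnScraper
    {
        // One shared HttpClient; connections are reused across requests.
        private static readonly HttpClient Http = new HttpClient();

        // Walk item ids sequentially, but keep `batchSize` requests in flight at a time.
        public static async Task<List<string>> FetchItemsAsync(int firstId, int count, int batchSize = 20)
        {
            var results = new List<string>();
            var batches = Enumerable.Range(firstId, count)
                                    .Select((id, i) => new { id, i })
                                    .GroupBy(x => x.i / batchSize);
            foreach (var batch in batches)
            {
                var tasks = batch.Select(x => Http.GetStringAsync(
                    $"https://hacker-news.firebaseio.com/v0/item/{x.id}.json"));
                results.AddRange(await Task.WhenAll(tasks));   // await frees the thread while requests run
            }
            return results;
        }
    }

That said, I suspect the real difference is that Node's event loop makes this the default behaviour rather than something you have to opt into.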
I like the point about building systems that take care of themselves and are largely hands-free. I'd say that applies regardless of platform (or "shininess"), though. The author has clearly internalized one of the more important lessons in engineering in terms of designing for maintainability (with an ultimate goal of zero/low-effort maintenance and/or extension).
I'm not sure the platform itself is as important to achieving this goal so much as the decision-making ability of the engineers themselves, though. Maybe a tendency to pick shiny because of shiny is just a way that poor decision-making surfaces? However, I don't see a problem picking an appropriate solution that happens to be shiny.
MS SQL Server is awful crap; just try any other DB to see that. I can't trust this author after "good old SQL Server". And I don't see anything except boring old-man grumbling in that article. Programmers should always keep learning, and they know that from the beginning of their careers.
I never push SQL Server too hard, but it has always been stable and extremely well documented. It's certainly better than MySQL in many respects.
Would you mind sharing a bit about how SQL Server has failed you? I would genuinely like to know, maybe there's some use cases where I need to at least be careful about picking SQL Server.
And yet it meets the key requirement: it's up and running, stable, for years, growing and (presumably) profitable.
Sure, they could scrap the whole thing and rewrite it in Grails, and add Rabbit 0.3 to make it run faster, but why scrap a working, stable profitable stack in pursuit of some shiny bauble?
Maybe yes, when you work at a bank, change 1 line of code per month, document this change in 10 Word documents, and spend most of your time drinking coffee and sleeping through PowerPoint presentations with 1000 attendees.
You are not being paid to change 1 line of code per month. You are being paid to be a living repository of institutional knowledge: so you (can) know (in a finite, bounded time) what line to change in order to achieve the business requirement, and what dozen-or-so Word documents you need to read in order to confirm which unintended consequences may result of that change.
That said, sleeping during massive presentations will hinder your abilities as a living repository... so, hang in there and keep pouring-in the coffee.
F# is an amazing language on the .NET stack that is strongly/statically typed but will infer many of the types. It has TypeProviders, which let you strongly type your external data sources (FSharp.Data.SqlClient is just amazing - statically typed SQL directly in your code!). Unions/discriminated unions, units of measure, etc.
Totally agree, but I remember the (childish) excitement of using the new, shiny, "cool" stuff. And then the satisfaction of snorting at the soon-to-be-released .NET Framework. Not to mention that Java rip-off of a language that would come with it!!! Oh, those were the days... I am a C# developer today, but don't tell anyone! ;)
It depends on if you like the boring stack, and also what your normal stack is. If you mostly write ruby, Sinatra or Rails should be your first port of call: they're widely used frameworks in your language.
But no, don't go for the zeitgeist for no reason: the zeitgeist is tautologically new, and new stuff tends to break a lot.
I play with all sorts of programming languages, especially on those long winter nights.
When it comes to production code it is all about the Java and .NET stacks, with some C++ only when there is a need to integrate some sort of native library, or call the respective VM APIs.
Even then I can usually sort it out with JNA, P/Invoke and RCW.
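For anyone unfamiliar with the P/Invoke side of that, here's a minimal sketch of what a declaration looks like in C#. The Win32 MessageBox call is just a stock example, not tied to any particular library discussed here:

    using System;
    using System.Runtime.InteropServices;

    class NativeInteropDemo
    {
        // P/Invoke declaration: binds a managed method signature to the
        // MessageBoxW export in user32.dll (a standard Win32 API).
        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            // IntPtr.Zero = no owner window; 0 = plain OK-button message box.
            MessageBox(IntPtr.Zero, "Hello from P/Invoke", "Native interop", 0);
        }
    }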
Fully agree. We're moving our work codebase from Clojure to Java. It's true that Java is boring and old and simple. That's what makes it great! Now we can focus 100% of our efforts on the business logic rules. (Granted, sometimes I wish we were using C# instead. But only sometimes.)
If you want C# but on the JVM, you're going to have a better time with Scala than with Java. It's a 12-year-old language, yet still exciting in some respects, and it covers pretty much a full superset of C#'s functionality (it needs a handful of uglier hacks for reflection, like ClassTags, but it can still do it).
I'm sure many will disagree with me on this, but I'm convinced that Scala is the Perl of the JVM world. It's a language with too many succinct but unreadable ways of doing things. Ultimately I don't think languages like this age well; they will eventually be superseded by other languages.
Clojure is a really boring stack. No abstract singleton proxy factories, just plain scripting. I have a web application in production using the plain Ring API, without frameworks. Like the good old days of CGI and PHP, but more sane.
I would love to agree with that, but I don't think we can. I've been writing Java (well, more recently Scala) for 14+ years, and I still see otherwise great programmers succumb to Java class-itis.
I see a lot of bad C# code, but I don't feel the need to make snide comments about C# whenever I see or hear it mentioned. It's pretty clear to me it's not the language's fault.
There's nothing in Java forcing you to make deep class hierarchies or leaky abstractions.
Nothing that forces you, sure. But when the culture around the language has encouraged that for so long, and the existing third-party libraries and ecosystem make it easier to do... well, that's what happens.
Not sure about the GP of my original comment, but I'm not trying to be snide; I just see it everywhere, and it really frustrates me. I don't see it with Scala/C++/python/ruby programmers, so what's the common denominator? The language.
I even see such constructs in Ruby periodically, including discussions claiming it's proper engineering that all serious people do. And Angular 1.x is a famous example of this in JS.
I used the Play framework, which explicitly opposes these practices in Java, but I felt like the language resists such attempts at rebellion.
Having used Clojure for a production environment, I really would have a hard time recommending it for anything but the most trivial application.
It's just not mature enough for anything that is going to be worked on by more than a small team and alive for more than a couple years, both in ecosystem & language maturity.
With enough effort and stubbornness, you can build any system in any Turing-complete language.
Whenever people point out massive projects done by Fortune 500s, I recall this and think "yes, Boeing can do that; they also used Ada for the same thing before", and "yes, Google can mesh 5 different languages within the same company, 2 of them invented there". Does that mean my company can do that?
No. We'd burn through our capital long before we got to the point of seeing returns, versus taking the 'boring', well-trodden route that everyone is already experienced in.
I'd say more "effort and stubbornness" is required to get a Java/C# app out the door in decent time compared with Clojure. There was a study not long ago in which Clojure won hands down for the fewest bugs and lines of code. I think Rich Hickey got a few things right, so don't write off Clojure too hastily. There's also Clojure Spec around the corner, so you'll be able to have your types and eat them too. Win-win, surely, no?
Writing Java is painful, especially if you know Clojure ;). I would say that Clojure really is mature tech by now, but yes, the ecosystem is lacking and very small compared to Java, Python or JS.
"Boring" meaning "a lack of surprise", sure. "Boring" as an antonym for "what the cool kids like", no. Plenty of unfashionable technologies are that way for reasons.
There were times when .NET was the new and shiny (it arguably still is, with the recent open-sourcing and the stuff going on in F#). I wonder if that's when the author started using it :)
Nice. I remember when I was begging my manager to switch from our classic ASP/JScript/MS SQL/Java Applet stack to .NET and he refused saying that it was not mature enough. Good Times.
This is a refreshing take, although maybe extended a bit too far. It seems like this kind of thinking applies mostly to the realm of software on the web and not necessarily on the systems side of things.
It's been open source for several years now. It's more open than Java, Android and many high-profile OS projects. They've also decoupled IntelliSense from the IDE, meaning any text editor, like vim, can get the same features.
Low maintenance is important, but so is functionality. It's frustrating to have to fix code that was working and gets broken by some update, but it's even more frustrating to have to copy/paste code because your language doesn't have the abstraction facilities that let you factor out the commonality.
> but it's even more frustrating to have to copy/paste code because your language doesn't have the abstraction facilities that let you factor out commonality.
I find this to be less of a problem in many modern programming languages (or greatly exaggerated by a specific language's fans when comparing their preferred tools vs. "the other languages").
I think in higher-kinded types these days, and can't stand to use a language without them (where I necessarily end up copy-pasting). There are plenty of modern languages without that.
I've only had to copy-paste code because of Scala's lack of support for polykinded code a few times, but it was very frustrating every time. Hardly any languages, even modern ones, support that.
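To make that concrete, here's a small C# sketch (my own illustration, not from anyone's codebase) of the duplication you get when a language can't abstract over the container type itself:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class MapWithoutHkt
    {
        // Without higher-kinded types there is no way to write one generic Map
        // that works for "any container F<T>", so every container ends up with
        // its own near-identical copy of the same one-line logic.

        public static List<B> Map<A, B>(this List<A> xs, Func<A, B> f) =>
            xs.Select(f).ToList();

        public static B[] Map<A, B>(this A[] xs, Func<A, B> f) =>
            xs.Select(f).ToArray();

        public static B? Map<A, B>(this A? x, Func<A, B> f)
            where A : struct
            where B : struct =>
            x.HasValue ? f(x.Value) : (B?)null;

        public static void Main()
        {
            var fromList  = new List<int> { 1, 2, 3 }.Map(n => n * 2);
            var fromArray = new[] { 1, 2, 3 }.Map(n => n * 2);
            int? maybe    = 21;
            var fromMaybe = maybe.Map(n => n * 2);
            Console.WriteLine(string.Join(",", fromList)); // 2,4,6
        }
    }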
If you think copy & paste is an option at all, I agree. But if you are going to rely on somebody else's work, better make it work they have put actual effort into getting right, and in which they have some stake.
Doing "boring stack" implies you actually know what you are doing, though.
Seconded. Sadly, our tech stack at work still includes some WebForms cruft, and I have a few friends at other companies who also still have to deal with it. Could be worse, I guess... I know someone who still has to put up with classic ASP.
Thankfully I work on backend stuff at my job currently, or I'd probably go crazy dealing with a mishmash of MVC and WebForms. WebForms are not super easy to get rid of if you weren't careful about how you coupled to them, so I think some code is still being added to ours.
> Good old C#, SQL Server and a proper boring stack
And? What is the proper boring stack for targeting a browser? The reason React is so popular is because it answers a fundamental problem of application delivery to the browser client.
If you're living in C# land, that's not always boring either; soon enough you have a GC nightmare.
I agree with everything in this article in principle, but it fails to leave room for innovation, where innovation is necessary.
And an MS stack when 80% of the world deploys to Linux on the serverside. You might be stuck in the wrong decade...
If you're running a small SaaS company, it doesn't matter all that much what 80% of the world is doing.
I used to feel the same way you do. My apps were all Ruby or Java, usually with Postgres, deployed on Linux.
My past couple of employers, though, have used C# and SQL Server running on Windows Server. And they've both been able to handle heavy workloads without using a huge amount of resources.
I'd think twice about using SQL Server if I were launching a small SaaS app due to licensing costs. But the extra cost of a Windows VM wouldn't hurt much if I had more than a couple of users. And I know from personal observation that a relatively boring Windows/IIS/.NET stack would handle the workload of my SaaS app without breaking a sweat. And there's no reason you couldn't mix in React on the front end. I haven't run into any noticeable GC issues even while running heavy workloads, but I could see it being an issue depending on the type of work you do.
That's not to say you shouldn't deploy a cutting edge web stack on Linux or the BSD of your choice. You can certainly be successful that way too. Perhaps a balance is best: use boring, battle tested technology where you can, and use the cutting edge tech in places where it will give you a competitive advantage.
> If you're living in C# land that's not always boring either, as soon enough you have a GC nightmare.
You mean garbage collection?
I'd love to hear about the "nightmare" you had. The worst I've ever had to do was restart the app pool, about 15 seconds of downtime and all was well again.
> And an MS stack when 80% of the world deploys to Linux on the serverside.
The MS stuff works. Reliably. For years. Decades. And has development tools no Linux solution even comes remotely close to matching.
> You might be stuck in the wrong decade...
That's kind of exactly the point the article was making. If the tools from "the wrong decade" are solid and reliable, just keep using them. That's the point.
I like React, but it's delusional to think you can't target a browser without it. I mean, we lived without front end frameworks for a loong time before that.
1) We had those before React.js and all the modern JS frameworks
2) A lot of people don't like having "rich applications built on the clientside", because in their experience that means buggy, broken applications that never work quite right and often break the scrollbar and back button.
Having been developing quite large ASP.NET MVC applications for about 7 years on 60M+ revenue websites, I don't think I've ever once had a GC problem for a website.