
As a hiring manager who’s worked at organisations of various sizes, I think the original article is a fair warning.

Headcount doesn’t take 2+ years to resolve. Even in heavily bureaucratic organisations, it’s a few months at worst.

Organisation-wide restructures can take years, and changes to departmental structure can be suspended while the restructure happens, barring unusual and typically director-approved circumstances (like scoring a major new project with a key client). But any employee would be well aware of such restructures and client projects.

Changes to pay will typically be postponed until the next pay review cycle. So could be up to a year. But if it’s longer then that’s typically a sign that your manager (or above) has already vetoed any such pay increase and they’re not being truthful with you about it.

Ultimately, if you get told to wait 2 years and the reasons are not “company wide restructuring” then there’s some shadow politics going on and you should definitely be reviewing your job prospects. And if there is a company wide restructure happening, then you should also be updating your CV just in case too.

If you get told to wait 3 years then just assume it’s never going to happen. Because you can guarantee that even if your management has the best of intentions, priorities will shift multiple times within those 3 years.


Being told you have to wait until the next pay review cycle is normal. It’s how a business with healthy and defined processes should operate.

But you should only be waiting at most a year. If you get told “wait 2+ years” then that’s usually a sign that they’ve already decided you’re not eligible (for whatever reasons they decide) but don’t want to be candid with you.

If you get told to wait for any duration beyond the next pay review cycle, then take that as a sign that you’re not going to progress under the current regime.


Can I get an exception if it was a terrible fiscal year?

Take the startup situation:

We were unprofitable year 1, we were break even year 2. Year 3 we look to repay investors and begin fun times. Year 4, fun times.

I suppose I can think of enough exceptions that I reject the theory the original OP posted.


There are obviously going to be exceptions. Every rule has them. Hence why I said “usually a sign” rather than “it’s a guarantee without any exceptions”.

But to take your startup example, they’re generally short on base salary with the hope that you score big when the company sells / floats. Which is a very different scenario to saying “we aren’t going to pay you more because we are unprofitable”.

Also, if a regular (ie non-startup) business isn’t profitable and is then freezing wages as a result, then that’s another good indicator to update your CV. You might be lucky to get a decent severance package, but even if you do, you’ll still want that CV updated.

So my advice stands.


> It’s how a business with healthy and defined processes should operate.

That sounds like bullshit. Why?


Because it shows the company has processes in place for economic planning, budget allocation, and all the other systems and checks that are meant to ensure stability and profitability.

That’s not to say that businesses with these processes defined can’t still be total shitshows. But the ones that don’t have those processes are more likely to be shitshows.


Not all of those systems will be running from the same hardware controllers.

That’s a very specific gripe to make. So specific that you have to acknowledge it’s not going to be a deal breaker for everyone. Which makes me wonder why you’d use the “Stockholm Syndrome” argument, assuming you used it in good faith and not just because you wanted to sound edgy (or some approximate synonym thereof).

It's a thing that confuses every single person the first time they touch a terminal! Could do without the diction-based ad hominem.

There’s no reason why you can’t have both.

Well behaved CLI tools have for years already been changing their UX depending on whether STDOUT is a TTY or a pipe.
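
The check itself is tiny. A rough Go sketch of the usual pattern (not lifted from any particular tool):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Stat stdout and check whether it is a character device (an
        // interactive terminal) or something else (a pipe, a file, etc.).
        info, err := os.Stdout.Stat()
        if err != nil {
            fmt.Fprintln(os.Stderr, "stat stdout:", err)
            os.Exit(1)
        }

        if info.Mode()&os.ModeCharDevice != 0 {
            fmt.Println("stdout is a terminal: colour, tables, prompts")
        } else {
            fmt.Println("stdout is a pipe/file: plain, parseable output")
        }
    }

It’s essentially the same isatty(3) test that ls and grep use to decide whether to colourise.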


Is this getting traction? The front page of HN and some meta-debate is a pretty low bar for what I'd consider traction if I were one of the richest companies on Earth.

> Microsoft is Windows. Anyone saying otherwise is completely delusional.

What's delusional is making unsubstantiated claims and then dismissing any counterarguments before they're made.

> Most of M$ office software has alternatives (Google Docs, OpenOffice...)

True. Yet MS Office is still the de facto standard.

> Github is constantly crashing and burning

True. But that doesn't mean it isn't still a business strategy for MS.

> Azure is garbage

Also true. But that doesn't mean it isn't profitable: "Microsoft Cloud revenue increased 23% to $168.9 billion."

> and they uttery killed Xbox

Quite the opposite. Xbox is thriving: "Xbox content and services revenue increased 16%."

> Oh and Linkedin is for actual psychopaths.

That's subjective. And even if it were true, that's got nothing to do with profitability (eg look at Facebook).

> If Windows dies, all of their other junk that is attached to the platform will die as well.

First off, literally no-one is claiming Windows is going to "die".

Secondly, even if it were to "die", you've provided no evidence why their other revenue streams wouldn't succeed when it's already been demonstrated that those revenue streams are growing, and in some cases, have already overtaken Windows.


I know devs are a different market, but how many folks do we know who daily drive Mac/Linux and use MS dev tools? VS Code, TypeScript, .NET?

I think they'll do just fine if Windows dies on the vine. They'll keep selling all the same software; even for PC gaming they already have their titles on Steam.


There already is a term: Quake Source Port

https://quake.fandom.com/wiki/Source_port


Except the reason for Discord’s initial success, and the literal only reason I have it installed, is voice chat when playing certain online games.

I love IRC, I even wrote my own IRC client in the 90s, but it’s clearly not going to be suitable for gaming in this context.


IRC for main chat, Mumble for voice chat when gaming. Been solid for decades. I have at least 3 functional Mumble servers saved (including my own) in my client; most of them are associated with an IRC community. I occasionally hear "Anyone down for some Quake? Hop on Mumble." or something to that effect. Mumble is pretty easy to host, so if you're using it with a small to medium group of friends, I'd say just throw up a server on your LAN somewhere. It's got decent mobile clients on F-Droid as well if you need one.

Not all gamers are techies though.

Some of my gaming buddies on Discord needed help getting that properly working. Asking them to set up and use both IRC and Mumble would be a step too far.

This is a common trap HN falls into. Stuff that’s easy and practical for people of our capabilities can be a nightmarish hellscape for other people.


I’ve got 8 year old Go code that still compiles fine on the latest Go compiler.

Go has its warts but backwards compatibility isn’t one of them. The language is almost as durable as Perl.


8 years is not that long, if it can still compile in say 20 years then sure but 8 years in this industry isn't that long at all (unless you're into self flagellation by working on the web).

Except 8 years is impressive by modern standards. These days, most popular ecosystems have breaking changes that would cause even just 2-year-old code bases to fail to compile. It's shit and I hate it. But that's one of the reasons I favour Go and Perl -- I know my code will continue to compile with very little maintenance years later.

Plus 8 years was just an example, not the furthest back Go will support. I've just pulled a project I'd written against Go 1.0 (the literal first release of Golang). It's 16 years old now, uses C interop too (so not a trivial Go program), and I've not touched the code in the years since. It compiled without any issues.

Go is one of the very few programming languages that has an official backwards compatibility guarantee. This does lead to some issues of its own (eg some implementations of new features have been somewhat less elegant because the Go team favoured an approach that didn't introduce changes to the existing syntax).
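
To give a flavour (a toy illustration rather than that actual project), code written in the Go 1.0 style, cgo preamble and all, still builds unchanged on a current toolchain:

    package main

    /*
    #include <stdio.h>

    static void hello(void) {
        printf("hello from C\n");
    }
    */
    import "C"

    import "fmt"

    func main() {
        // Call into C via cgo, then back into plain Go.
        C.hello()
        fmt.Println("hello from Go")
    }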


8 years is only "not that long" because we have gotten better at compatibility.

How many similar programs written in 1999 compiled without issue in 2007? The dependency and tooling environment is as robust as it's ever been.


> because we have gotten better at compatibility.

Have we though? I feel the opposite is true. These days developers expect that users of their modules and frameworks will be regularly updating those dependencies, and doing so dynamically from the web.

While this is true for active code bases, you can quickly find that stable but unmaintained code will eventually rot as its dependencies deprecate.

There aren't many languages out there where their wider ecosystem thinks about API-stability in terms of years.
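
Part of why Go resists that rot is that module versions are pinned and checksummed, so an untouched project keeps building against exactly the dependencies it shipped with. A hypothetical go.mod (module path and dependency picked purely for illustration):

    module example.com/oldtool

    go 1.13

    require github.com/BurntSushi/toml v0.3.1

The companion go.sum pins hashes for that exact version, and the module proxy keeps serving it even if the upstream repository disappears, so "unmaintained" doesn't have to mean "won't build".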


If they change the syntax, sure, but you can always use today's compiler if necessary. I generally find Go binaries have even fewer external dependencies than a C/C++ project.

On the scale of decades that's an incorrect assumption, unless you mean running the compiler within an emulated system.

It depends on your threat model. Mine includes the compiler vendors abandoning the project and me needing to make my own implementation. Obviously unlikely, and someone else would likely step in for all the major languages, but I'm not convinced Go adds enough over C to give away that control.

As long as I have a stack of esp32s and a working C compiler, no one can take away my ability to make useful programs, including maintaining the compiler itself.


For embedded that probably works. For large C programs you're going to be just as stuck as you are with Go.

I think relatively few programs need to be large. Most complexity in software today comes from scale, which usually results in an inferior UX. Take Google Drive for example. Very complicated to build a system like that, but most people would be better served by a WebDAV server hosted by a local company. You'd get way better latency and file transfer speeds, and the company could use off-the-shelf OSS, or write their own.
