This article highlights one of the things that really bugs me about tech hiring. The availability of developers who know a language or tech isn't a limiting factor if you're willing to hire people and give them the time to learn. Yes, it's expensive, but if there really are productivity benefits from using a particular tool then you should end up better off in the long term anyway.
The expectation that another company will train your workforce, or that they'll train themselves in their spare time before joining, massively inhibits the proliferation of different, new, and interesting technologies and keeps 90% of the devs in the web and mobile world on the same 3 platforms (JS/TS, Swift, and Kotlin).
Unless companies start hiring to mentor people this will not change, and probably TS will just eat all the others in the long run.
> Unless companies start hiring to mentor people this will not change
This won't change until SWEs stop job-hopping so much. In turn, the job-hopping won't stop until employers stop prioritizing new-hire compensation over existing employees... and I don't see how this stops, because it's measurably profitable and self-reinforcing.
On the flip side, as an employee, I like that I can reuse the skills I have, it commoditizes employers and lowers the friction in changing jobs. Knowing what I know about capitalism in practice, employers would love the savings from employee tech lock-in.
Counterpoint, I made a recent shift to Android from many many years of iOS development. It's not the language that I struggled with, it was the completely different engineering culture surrounding the platform. Yes there are many similarities, things like concerns for battery life and thoughtful network usage. However, that's where the similarities end. The APIs and appropriate architecture decisions are completely different, so I needed to build all new instincts.
Then you have the tooling, which is completely different. Where it would normally take me 5 minutes to jump into Xcode to profile an application and inspect for leaks or other performance issues, I spent a few hours going through Android documentation learning how to do the same thing.
Last, and this one is a bit nuanced and sort of hard to explain. As a senior engineer, you're typically expected to help mentor others and maintain the foundations of your product so that as new people join they have a template to work from. If you're switching to something so different, you could end up hurting the team (or company) by using the wrong patterns and promoting incompatible ideas. People need to trust you and if you break that trust at the beginning of your employment, it'll be really hard to earn it back.
No regrets though, and I'm grateful to my employer for giving me the space to make the shift. However, I do not believe in the idea you can hire someone at a generalist level and then expect them to be productive anytime soon.
There is a saying, "Jack of All Trades, Master of None," and although very cynical, it's just an honest statement when it comes to these things. Hire the right person for the role you need filled first, then if you want to make room for engineers who are looking for big career changes, make sure your product and team won't suffer for that and you have the right culture in place to encourage that kind of growth.
Being active as a generalist with some deep knowledge across specific stacks, I can pretty much vouch for this. Even a stack one is comfortable with takes some time to get back into after a couple of months away, let alone a stack one has never used.
I think the key point from the OP you're missing here is mentorship/training to avoid all of the issues you described. I don't think the OP was advocating for hiring the best generalist you can find and then just throwing them to the wolves and expecting senior-level productivity.
Developers also switch jobs to get more salary and market themselves as specialists in a specific stack.
I have yet to see a developer take a pay cut to retrain in a different tech.
Companies already have to factor in training people on their company-specific stuff. That is why no one expects a new hire to be fully productive in the first 3-6 months. Even if you know JavaScript, it is still quite a lot of work to understand a codebase ... now take someone who does not know JavaScript and in addition has to learn the company-specific stuff.
That is why hiring works the way it does: "tech" is a transferable skill between companies, and developers like to invest in transferable skills.
Learning JavaScript is such a small cost for an experienced developer who has done any kind of similar work in any language, relative to the 3-6 months of company level ramp up you propose, as to be essentially meaningless.
If you need a JS expert because you currently don’t have anyone who can figure out performance problems with junior engineer’s changes, that’s one thing, but you then probably can train that JS expert in any JS framework very quickly. Likewise if you need a UX expert, that UX expert should be able to pick up Swift or whatever language you need them to work in faster than it takes them to grok the business problems you’re actually solving.
"I yet have to see developer taking pay cut to retrain in a different tech."
I'd argue that most tech just does not take that long to train in. If I am a proficient developer who knows Postgres, how long is it really going to take me to be productive with Mongo?
This is very different compared to say a web developer moving to robotics. I would take a pay cut for that because the knowledge gap is so wide.
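To make that Postgres-to-Mongo point concrete, here is a sketch of the same logical query in both systems (the `users` schema and field names are made up for illustration). The syntax differs, but the concepts — filtering, projection, sorting, limits — map one-to-one, which is why the retraining cost is low:

```python
# The same logical query, "first 10 adult users by name", in both dialects.
# Schema and field names are hypothetical, for illustration only.

sql_query = """
SELECT name, email
FROM users
WHERE age >= 18
ORDER BY name
LIMIT 10
"""

# MongoDB expresses the same intent as structured documents (pymongo-style)
# rather than as a text query:
mongo_query = {
    "filter": {"age": {"$gte": 18}},        # WHERE age >= 18
    "projection": {"name": 1, "email": 1},  # SELECT name, email
    "sort": [("name", 1)],                  # ORDER BY name ASC
    "limit": 10,                            # LIMIT 10
}
```

A developer who can already reason about indexes, selectivity, and result paging in one system mostly needs to learn new spelling, not new concepts.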
> The availability of developers who know a language or tech isn't a limiting factor if you're willing to hire people and give them the time to learn.
It is a job satisfaction risk though. Just because a person is capable of learning a language or technology, it doesn't mean that once they have learnt it they will find satisfaction in using it every day. Surely most people with a decent amount of experience have at least one language or technology they dislike?
Only hiring people who have experience with the language or technology doesn’t just mean that they can be productive much sooner, it also means that you are selecting only the people who know they want to work with it every day whilst filtering out people who would end up disliking it.
> it also means that you are selecting only the people who know they want to work with it every day whilst filtering out people who would end up disliking it
Doesn't that imply that we all like the languages we use every day? I'm fairly sure that's not the case for a lot of developers. In fact, I'm certain loads would like the opportunity to use what they think is a 'better' language, but they can't because too few companies hire for it.
If somebody is working with a language and applies for another job working with that language, then you have high confidence that they aren’t going to suddenly realise they dislike the language and therefore the job.
If somebody hasn't worked with the language before, that confidence is not there at all.
> I'm certain loads would like the opportunity to use what they think is a 'better' language
Sure, but expectations and reality are not always in agreement. The grass is always greener on the other side, right? Those people might have a different opinion after working with the “better” language for a while.
> Doesn't that imply that we all like the languages we use every day?
Yes and no. I tolerate writing Java, but for most applications I'm not a fan. I think what GP is alluding to is that developer ergonomics matter more than we give them credit for.
To be fair, if you exclude the ones that are garbage, you basically have React, Vue, and Angular. Of those three, React is the clear winner in terms of market penetration, but there are certain things in both Vue and Angular that blow React out of the water.
You will spend a year teaching someone, then he leaves for another company that gives him a 20% raise. You're net negative, they're net positive. You'll lose the race in the end.
This could be a good approach for someone huge like Google or a government. Asking that of an ordinary company will not work.
> You will spend a year teaching someone, then he leaves for another company that gives him a 20% raise.
This only makes sense if you hired the person at 20% below their market rate under the belief that they were less valuable as a hire simply because they didn't have a specific skillset you needed, and then once they acquired that skillset you failed to adjust their salary accordingly.
But the entire thesis of the OPs comment is that we shouldn't be valuing staff based on any one specific technical skillset, but rather based on their broader software development skills and experience, because learning the ins and outs of any specific software stack is not itself that difficult given enough experience.
Solution: hire and retain at a competitive market rate. Problem solved.
I think the assumption behind the post you're replying to is that this will happen regardless of what you are paying them, that is, there is no actual stable "market rate" that can be precisely targeted.
It may seem this cannot happen, but in a system experiencing rapid inflation it's what you'd expect to see. This doesn't have to be on a macro level. The endless tide of VC + crypto funds have created a form of (hyper?) inflation inside the tech industry in which wages are spiralling upwards without any fundamental link to increased value. It's simply a bidding war in which those who fight for closer access to the flow of inbound money can always outbid other firms, regardless of how much they pay, because there is no "market rate", just "whatever this guy currently makes + 20%".
> I think the assumption behind the post you're replying to is that this will happen regardless of what you are paying them, that is, there is no actual stable "market rate" that can be precisely targeted.
> The endless tide of VC + crypto funds have created a form of (hyper?) inflation inside the tech industry in which wages are spiralling upwards without any fundamental link to increased value. It's simply a bidding war in which those who fight for closer access to the flow of inbound money can always outbid other firms, regardless of how much they pay, because there is no "market rate", just "whatever this guy currently makes + 20%".
Of course there's a market rate, it's just increasing. That's called supply and demand. And it would happen regardless of any one specific technical skillset.
If anything this just further emphasizes the OPs point as specifically because demand so significantly outstrips supply, companies should be willing to train someone up rather than forego a promising hire, as that increases their potential labour pool (this is basically the labour market version of a substitute good). For the same reason companies are looking at partial- or full-remote hires, launching offices in other markets, etc, as that gets them access to a broader labour market outside the SV bubble.
Frankly, I don't know what you even mean by the claim that there's no "fundamental link to increased value". There is a demand for labour. There is a clearing price in the market for that labour. That clearing price is going up due to a high demand for that labour relative to the supply. Buyers are willing to pay for labour at the clearing price based on their assessment of cost-vs-benefit.
You may not think that labour is being put to productive use. But that doesn't imply the labour market is in any way flawed or that the laws of supply and demand have somehow broken down.
"Buyers are willing to pay for labour at the clearing price based on their assessment of cost-vs-benefit."
If you're competing against firms like Google or startups that just raised unicorn funding, and these days everyone is, then the people bidding in the market for labor are not doing cost-benefit analysis. They just need to hire. They have the money, they need to hire, and they're going to do it no matter what the price ends up being, because that's just what they do. Whether the hires actually generate more value than they cost is only rarely assessed, because the money won't dry up anytime soon.
Additionally, market rates are an abstraction. Consider a simplified example where you have 3 people who need to buy a chicken. They're all starving so they need the chicken or else they die. First person bids $1. Second person says, I'll pay $2. Third person says no wait, wait, I'll pay $3. First person says, I'll bid $4.
This loop ends when only one of the people can still pay and everyone else starves. It does not end when the price has reached some a priori knowable "market rate" that you could go out and measure ahead of time. The people actually in the market have limited information and don't know what the bidding limit is for the other players, and thus to them it can appear that regardless of what price they bid, they are always beaten by 20%.
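That bidding loop can be sketched as a toy simulation (the budgets are made up for illustration, and distinct budgets are assumed so exactly one bidder remains). Note that the final price is not knowable in advance by any bidder — it emerges from the hidden budget of the runner-up:

```python
def auction(budgets, start=1, increment=1):
    """Toy English auction: the bid rises until only one bidder can still pay.

    Returns (winning_bidder_index, final_price). Assumes distinct budgets
    so that exactly one bidder survives the last round.
    """
    price = start
    active = list(range(len(budgets)))
    while len(active) > 1:
        price += increment
        # Anyone who can no longer afford the current price drops out.
        active = [i for i in active if budgets[i] >= price]
    return active[0], price

# Three starving chicken-buyers with hidden budgets of $3, $4, and $7.
winner, price = auction([3, 4, 7])  # winner is bidder 2, paying $5
```

The winner pays just above the second-highest budget, not anything resembling a measurable pre-existing "market rate" — which is the point being made about deep-pocketed bidders.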
In a properly functioning liquid market in which nobody has access to printed money, this cycle is supposed to reach an equilibrium fairly quickly with supply catching up to counterbalance demand. In the actual software market there's been a fairly direct flow from quantitative easing and crypto directly into employers, as well as the yearly 20% revenue bump Google always seems to achieve, so some participants have effectively "unlimited" money. It doesn't make sense to talk about a clearing price in that situation. From the perspective of any normal business that gets money from customers, they will always lose.
> If you're competing against firms like Google or startups that just raised unicorn funding, and these days everyone is, then the people bidding in the market or labor are not doing cost benefit analysis. They just need to hire.
That is a cost-benefit analysis.
They've received funding, they have a business model, now they need labour.
It seems to me your real gripe is that there's a large amount of investment flowing into Silicon Valley and you don't believe that should be happening.
> This loop ends when only one of the people can still pay and everyone else starves.
That's right. And that means, in the valley in particular, some companies will never get off the ground because they can't afford the price for labour.
The solution is simple: hire in a different labour market.
> To the people actually in the market, they have limited information and don't know what the bidding limit is for the other players, and thus to them it can appear that regardless of what price they bid, they are always beaten by 20%.
Sure, that's called incomplete information, and it's a feature of virtually all markets, minus open and instantly traded markets like stock exchanges.
I'm not sure what point you're trying to make here, other than to complain about a general feature of capitalism that (in a refreshing change of pace) happens to be working in favour of labour.
> In a properly functioning liquid market in which nobody has access to printed money
And now we get to the political axe you're grinding.
Look, the SV labour market has been explosively expensive for 20-30 years. This isn't a new phenomenon by any stretch, and it certainly pre-dates the 2008 financial crisis and the low interest rate environment that followed. There's a reason outsourcing became so popular throughout the early 2000s.
The problem is that tech companies failed to expand their labour pool beyond a few tight markets, either by opening offices in other locations or supporting remote workers. That's their choice, but don't complain when labour prices go up as a consequence.
My point is that you're over-simplifying. You've given two "simple" solutions, neither of which is actually a response to the post you were replying to by vbezhenar:
1. "Hire/retain at market rate". Easy unless you don't know what the market rate is, which you don't, because - as you say - incomplete information is an inherent feature of any resource allocation system including markets. Which is probably why vbezhenar phrased their claim in relative terms instead of absolute terms, whilst caveating that this is true for an "ordinary company" but maybe not for Google or governments. Theirs is an inside-the-market view. You're arguing that all you have to do to solve this problem is somehow step outside the market, and just know what the final price actually is for any given person.
2. "Hire in a different market". But that isn't an answer to the OP's point, it's an agreement with it. He's explaining why ordinary companies will refuse to train people. You're claiming the "solution" is to get out of Dodge i.e. not train people. It's not a refutation.
There's no political axe grinding here. What's happening is not unique to SV tech firms. It's an inevitable consequence of central-bank monetary policy distorting classical market economics. The causes of the policy may ultimately be political, but the outcomes are inevitable.
> But the entire thesis of the OPs comment is that we shouldn't be valuing staff based on any one specific technical skillset, but rather based on their broader software development skills and experience, because learning the ins and outs of any specific software stack is not itself that difficult given enough experience.
Exactly. Tech stacks are extremely transferable, at least for an experienced developer.
Imagine hiring a painter. You want a green wall. You don't rule them out, or pay them 20% less, because the last wall they painted was pink.
> and then once they acquired that skillset you failed to adjust their salary accordingly.
Or your budget takes a hit due to a half-year gap in delivery caused by the boot camp you conducted, and then a bigger company snatches your team while you have no way to react.
>If you need to slip or stop development for a half year because you need to train up a whole new team, you have a whole other problem.
Yes, which is why nobody reasonable does that. Which leads us to the original point: many companies will not be interested in taking the risk of introducing Flutter.
Isn't the corollary that there are other companies who train up people who you can hire for a 20% raise from them? Or even just give your existing staff a pay rise once they reach a more senior level?
If you can't afford to do that, then that's saying you can't afford to hire people who know a lot about the technology you're using, and then that's a whole different issue.
I think a lot of companies are figuring out, right now, that employee retention isn't all about money (and crappy snacks). Mentoring and proper training on decent platforms, not to mention a decent work/life balance, go a long way when you start comparing different company cultures.
I might grind away at Amazon for a couple of years (personally - no offense to the people that love it, it isn't for me), but that sort of job first, life second - everyone move up metric isn't for everyone. Some people are "professionals" and some people are "craftsmen"... I guess? Neither one is good or bad, they're just different mindsets and goals.
There are companies (though few, I suppose) that want "craftsmen". There are companies that want "professionals". A good company, wants both.
>"You will spend year teaching someone then he leaves for another company who gave him 20% rase."
If you hire a decent developer and it takes them a year of teaching to become productive in, for example, React, the problem is not with the developer. A normal developer should be reasonably productive in a matter of days. OK, give it a month. And it does not require a tutor hanging over their shoulder.
The problem is with you, as you've just hired someone who is, at the moment, unqualified as a developer in general.
IDK what your experience is, but in my experience at Google it would take about 6 months before I could consider someone "reasonably productive". Or maybe it takes about 6 months before I'm confident that they are actually underperforming.
A week is just enough time to get a cursory understanding of the product and tooling and fix a newbie bug.
Google is not at all a reasonable proxy for the rest of the industry. It has an almost entirely parallel infrastructure that bears little resemblance to what everyone else learns and uses. Google has notoriously high training times for this reason and always has.
I realize that, but 1 month vs 6 months? It's not just that the infrastructure is all different, ramping up on a codebase is extremely time consuming itself. You're going to have that at any other large tech company.
If it's a tiny startup on mostly greenfield stuff then sure, but that's also not a reasonable proxy for the rest of the industry.
Trust me, Google is just weird. I used to work there. When I joined SRE it took 3 months for me to be able to do anything useful at all, and another three months after that before I was really able to function on my own without needing to constantly look things up or ask for help.
After Google I joined a startup where I built two new products from scratch, including hiring most of the team myself. Onboarding time was about a week, and that included learning a new programming language from scratch (Kotlin, which is, in fairness, easy to learn). This remained true even after several years, when the codebase had become quite large. Onboarding time never really changed for programmers of equivalent skill.
Ramp-up time in most normal businesses and most normal codebases is measured in weeks, not six months.
>"In most cases that I’ve seen, it’s not React that’s difficult to ramp up on, it’s the overly complicated codebase."
Well, the developers you want to hire would know zilch about that codebase, so you have to train them anyway. And after you do, they will not leave for greener pastures on the strength of that new knowledge, since nobody else uses that codebase.
I think Flutter is better _because_ it uses Dart. I used to hate that decision ("Not another language to learn.."), but Flutter and Dart are like Mac and macOS (I'm aware people complaining about macOS quality going down the hill.. but that's another story). They're both from the same company, seemingly built for each other, and the language (Dart) improvements are _relevant_ for the platform (Flutter).
Flutter still has its quirks, but hands down, it's the best platform for the majority of "standard" apps. Great developer experience, acceptable performance and size.
Also, about hiring.. from my experience, it's one of my selling points in recruiting mobile developers. Many mobile developers want to learn Flutter, so I easily get dozens of candidates who know Flutter only from their hobby projects but want to get involved in real projects. But, of course, you still need some senior Flutter guy.
My only complaint is since the platform is expanding, things break too easily. You _can_ lock dependencies, but you'll feel guilty.
I don't want to get down too much into the details because much of language design is subjective. I found it very easy to work with because it has a lot of the edges sanded off, relative to JavaScript. The developer experience is pretty good.
One of the things that always surprises me about Dart is that Google continued to invest in it for so long without any real returns until fairly recently. Dart came out in 2012 and to my knowledge had basically no usage for several years. Flutter came out in 2017, and then it took a decent amount of time to start picking up steam. Very uncharacteristic of Google to invest in something like Dart for 7+ years before it really started seeing any returns on the investment.
There are various theories along the lines of "principal engineer retention strategy", where part of the reason for these projects (and I have to emphasize "part" in "part of the reason") is because they are the pet projects of some particular individual at Google who can deliver tons of value to the company other ways, and who might leave if they don't get to work on their pet projects.
I have definitely heard stories of people at companies like Google and Facebook who have done things in their career that saved or made the company enormous amounts of money in one fell swoop, people who can help get massive projects back on track or solve thorny resource usage problems. The theory is that if they've done it a couple times in the past, you do what it takes to keep them around because they'll probably do it again in the future.
These people generally get positions crafted for them, and sometimes teams.
As someone who started programming using GCC, Vi and commandline tools, I just hate the tooling around Flutter.
And yes, someone will probably point out that I can use Flutter with commandline tools after I disable all the bloatware that needs to be installed anyway, but that's missing the point.
How is pointing out that you don't need the tools you hate and you can use the same kind of tools in the CLI as you would with any other framework missing the point?
You only need dart and flutter itself, and both of those can be used via the CLI just fine. I know because when I develop Flutter I exclusively use my classic tmux+vim setup to do so.
If I want to create an Android build I naturally also need the Android SDK, but that is even packaged for Debian and Arch Linux, and even if I use the newest one directly from Google it's just a one-time setup. That is completely unrelated to Flutter (which I can test via the native or web build too) and 100% related to the build target (Android in this case), so I'm really not seeing your point.
I don't understand: I am using Flutter just fine from vim and I have never even tried their IDEs. What do you mean by "disable"? What "bloatware" were you forced to install in the first place, and how is it running?
>Flutter still has its quirks, but hands down, it's the best platform for the majority of "standard" apps
How is the market for "standard" apps doing? One would think that by now "UI for a Database/API" should have died out already? Even if companies do need an UI for a DB or an API, that job must have had become really simple with all the frameworks and libraries.
On Twitter I keep seeing people building the millionth water-tracking app, so I guess there's still a market for everything, but I feel like "coding a regular UI with a login form and stuff" kinds of jobs must be vanishing, right? Surely design tools must be able to extract functioning UIs, or a bit of coding know-how should enable anyone to put together a decent UI using component libraries?
"How is the market for "standard" apps doing? One would think that by now "UI for a Database/API" should have died out already? Even if companies do need an UI for a DB or an API, that job must have had become really simple with all the frameworks and libraries."
From my casual observation the learning curve for "UI for a Database/API" is becoming steeper every year. Such software reached its high point with VB6 or MS Access but since then simple tools have been dying and replaced with ever more complex frameworks. You still can't easily develop a web app that's as powerful and simple as you could do on the desktop with VB6 or MS Access.
I completely agree. It's amazing to me that MS Access is still probably the best rapid application development tool ever made. There's just nothing like it that works for the web.
In my experience there's always _something_ that isn't well supported by design tools. Some validation rule, some odd required relationship, a UI that isn't at all supported, and what is supported isn't at all liked by stakeholders. Half the app is legacy and still being migrated and this new widget depends on XX legacy component, etc.. I've personally observed 3 apps start as a GUI-based application, but by the time it shipped v1, and definitely v2, most of the app was hand-rolled.
Also in my experience, basically every company out there needs or wants an app. A web app, a phone app, a TV app, a tablet app.. internal apps, external apps, apps to gap 3rd party app integrations, they're fucking everywhere. A lot of these 'apps' are just web forms over CRUD data for one reason or another, but it's custom for some reason. And the reasons a GUI builder gets phased out also apply to why app-farm apps get phased out of a company. Eventually, with enough success, having an internal dev team is just so much smoother than contracting or bridging the gap with Excel spreadsheets. Someone has to wire things together, and eventually they get to fixing the root cause of inconsistency and a custom app is born.
I think we will have programming jobs for basic forms for a lot longer than any one of us would suspect, for a lot of reasons. But my last and most important thought is, once something becomes easy for anyone to do it becomes common. To stand out you need to be uncommon in some way, so there will always be space for people who can do what most others cannot.
Established companies have established company problems, complex flows that need to account for things like legacy deep links, A/B testing, UIs that go off the beaten path for brand identity, etc.
It all adds up to very little of a common denominator for all of these apps.
I've worked on some of the top apps in the play store for their respective categories, and if you saw their codebases your first thought would be "I could recreate this app in 1/10th this much code!".
After actually talking to all the stakeholders involved your second thought would be "I'm amazed this didn't take 10x the code!"
Oh sure, all good points. However in such cases React vs Flutter is not the part of the discussion at all because they tend to have so much investment that re-do is almost not an option. Bespoke tools, integrations, accommodations for odd behaviours or bugs etc. are huge costs. So many horror stories of failed re-do's.
You would think so but no, in fact, such apps have been getting more numerous yet harder to make over time rather than what you'd expect.
Causes:
1. Web platform is very unproductive and hard to learn compared to what was available in the 90s. There's no widely accepted, robust and modern equivalent for Access, VB, etc. These tools were designed to be easy to learn. Smart business analysts could throw something together. It'd be a mess but you could graduate it into a "real" app, or something approximating that, without needing a rewrite. This culture is gone. No, the web is definitely not as easy. Think about how complicated just binding a DB table to a paging table view is. HTML doesn't do this, SQL queries can't even be serialized natively, so you end up needing a custom web server, a backend framework, a frontend framework, extra JS libs or widgets to give paging or scroll or fast updating search. Consider that with Access or FoxPro it could be almost all GUI driven.
2. Although a few complex requirements got simplified out by the march of tech (e.g. offline access, supporting downlevel browsers), mostly, enterprise requirements became more complex. Some of this is reasonable and legitimate, like integrating with SSO systems, mobile versions, better auditing, more beautiful UIs and elimination of scheduled maintenance periods. Some of it is of questionable legitimacy. A lot of CRUD apps become over-engineered because of CV-driven development. Does your internal app for the business really need to run on AWS Lambda for scaling reasons? No. It doesn't, because the traffic levels are predictable a long way into the future and a single dedicated machine can do what you need. Will you be able to find a developer who will actually admit that and throw together a simple Spring Boot or PHP app, instead of trying to Web Scale™ it up the wazoo? Maaaaybe.
3. Platform churn. Businesses don't like rewriting apps but from time to time they have to, because either they can't find anyone who knows the old tech anymore e.g. COBOL, or the platform goes out of support. So the same stuff gets rewritten again and again without underlying change in business requirements. Worse, these projects often fail and may need to be attempted multiple times.
4. Many of these "standard" apps are astoundingly complicated. "Regular UI" can involve complex and frequently changing UI designed to navigate large datasets. Is Facebook a "regular UI with a login form"? Well yeah but it's still a lot of work to build. Think about the complexity of Bloomberg terminals for example.
5. The idea that all business apps were written by 2000 already is false. Even today there are a shocking number of business processes that have little or no IT automation; they're still based on physical paper. I didn't believe it myself until I worked in the enterprise space for a while and saw it with my own eyes - there is still enormous business value that can be delivered from writing new "standard" apps. And of course, even once you get beyond paper the long tail of business processes that are automated using an unholy and unstable mix of nightmarish Excel macros, PDFs and executive assistants is more or less unlimited.
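To make the paging complaint in point 1 concrete: even before you touch HTML, HTTP, or a frontend framework, serving one page of a table means hand-writing arithmetic that 90s RAD grid widgets did behind a property sheet. A minimal sketch in plain JavaScript (the function names here are illustrative, not from any framework):

```javascript
// Translate a 1-based page number into the LIMIT/OFFSET pair a backend
// would plug into a query like:
//   SELECT * FROM customers ORDER BY id LIMIT ? OFFSET ?
function pageToLimitOffset(page, pageSize) {
  if (page < 1 || pageSize < 1) {
    throw new RangeError("page and pageSize must be >= 1");
  }
  return { limit: pageSize, offset: (page - 1) * pageSize };
}

// In-memory stand-in for the query itself, returning one page of rows
// plus the page count the UI needs to render its pager controls.
function paginate(rows, page, pageSize) {
  const { limit, offset } = pageToLimitOffset(page, pageSize);
  return {
    rows: rows.slice(offset, offset + limit),
    totalPages: Math.ceil(rows.length / pageSize),
  };
}
```

And this is only the server-side slice of the problem; wiring it to an endpoint, a fetch call, and a rendered table are all still separate layers of code.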
I think programmer salaries are partly being squeezed upwards by the fact that we've made the tech stack so difficult to learn. Have you ever watched someone try to learn programming from scratch, like at a bootcamp? I have. It's excruciating. I watched as they tried to teach someone who'd never coded before how to write a "standard" CRUD web app, using Ruby on Rails. Total failure. They were not even remotely in reach of the goal. To succeed with even a basic app they needed to learn about Ruby, SQL, HTML, HTTP, ORMs, JavaScript, JSON, CSS, and of course the UNIX fucking shell because even if you pay $$$$ to Heroku to simplify deployment, it's all driven by a CLI anyway!
Back in the 90s, one reason Microsoft won was that they managed to hide the complexity of their underlying platform with beginner-friendly languages and tools. When Windows programming started sliding into irrelevance and they dropped VB to try and compete with Java, we made the on-ramp way steeper.
Yeah, there's also https://www.jmix.io/tools/ which I've looked at before and seems pretty nice. It's basically a set of libs and IntelliJ plugins on top of Spring, so you can quickly knock up a CRUD app but then extend it later with Java.
If I needed to quickly throw together a business CRUD app then I'd be tempted to buy it. At $1000/yr/developer, that's about the cost of 2 weeks of work (assuming a lowballed $90k/yr salary). Can it save two weeks per year? My guess is yes, especially if it lets you hire less experienced devs who maybe haven't written a CRUD web app before or would find it slow going / be likely to make errors. I dunno if that pricing level puts it in the Fortune 500 universe. It probably makes sense for freelancers and internal corp devs too.
On the other hand, it's like targeting Flutter, the JVM, iOS/macOS, and the web browser without additional tooling, IDE plugins, FFI debugging issues, or having to add layers of idiomatic wrapper libraries.
I've been using react native heavily for the past 18 months.
Because there's no JIT, JavaScript is slow on mobile, especially on Android. I spend a lot of my creative energy trying to optimize the render function, and I often wonder if I should have chosen Flutter instead.
Each framework has its bullshit you don't discover until it's too late, so I'm sure if I had chosen Flutter I'd be complaining about something else... but boy am I tired of stuffing things into useMemo and useCallback, and stressing about the identity of things in dependency arrays.
There's some exciting stuff coming up this year with React/React Native: Hermes, the React Native "New Architecture", and React concurrent mode (although I still don't 100% understand what that is). I hope one of these things improves the current situation.
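For readers who haven't fought with this: React's `useMemo` only recomputes a value when an entry in its dependency array changes identity (compared with `Object.is`). A rough standalone sketch of that caching behavior in plain JavaScript (not React's actual implementation) shows why "the identity of things" matters so much:

```javascript
// Sketch of useMemo-style caching: recompute only when some dependency
// changes identity (Object.is); otherwise return the cached value.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

// Note: a fresh array or object literal has a *new* identity on every
// render, so passing one as a dependency silently defeats the cache --
// exactly the trap the comment above is describing.
```

This also hints at why over-memoizing can backfire: every memoized value pays the dependency-comparison and cache-bookkeeping cost on every render, whether or not it ever saves a recomputation.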
My team wrapped up an ambitious RN project over the last 6 months and had no such issues. We actually migrated from "vanilla" RN to the latest Expo release, as EAS allows you to use native code and do nearly anything you could do with vanilla.
Every codebase is unique, but I would (and have) recommended this stack to multiple teams as it served us really well.
We onboarded 3 React devs - 2 of whom had never used RN before - and all 3 were pushing out features within a two-week sprint.
We also got to scan a QR code from a GitHub PR and allowed other engineers, QA folk, and designers to test PRs before merging on a real physical device - I am not sure if Flutter has something similar, but the tooling around RN is amazing.
Also, LogRocket, Sentry, LaunchDarkly, Auth0, and most of the SaaS tools you're familiar with in React have solid React Native support, oftentimes including web if you use RNW.
Memoizing everything might be hurting more than helping your app's performance. I've built a few different apps with RN; the issues here might be specific to this project.
In 2019, I joined a project with 2 other partners to develop an app for audiobooks. The CTO said the app should be done using Flutter, and he sold it like it was paradise on earth. I disagreed but couldn't push that hard to convince the rest of the guys on this detail.
Eight months into the project, the CTO resigned and left, leaving us with 8 months of almost no major development, because it was impossible to find Flutter people and, most importantly, people who were proficient on that platform. In the end we relied on an agency, paying a huge price for upgrades.
Nowadays I am doing an RN project, and the speed and results still put it above any other option available to me. I would only pick Flutter once it reaches RN's degree of maturity across the board.
We did that with one guy after month 3; he accepted and started a Flutter course, etc. He quit the job in month 5, saying he disliked Flutter and Dart.
I've trained on Flutter for over 8 months, pushed an app from design all the way to the App Store, and built the API in Dart as well.
I am one of those weirdos who likes it, but nobody is hiring, so I am brushing back up on JavaScript and React Native. One thing I would love to see is JSON being automatically accessible just like in JavaScript; also, routing in Flutter needs work. It's not for everyone, but it has a ton of potential if some things get changed.
Any idea how long it should take? I don't do frontend, but when I tried making GUIs back in the Tk heydays, it was feasible to cook up a very functional GUI in the first day of learning/usage.
Anyone with reasonable coding experience in frontend or backend can do a project in React Native in less than one month. I was doing CSS/HTML frontend development back in the day, and it took me one month to learn JS + React Native. It's a very convenient and straightforward process. In Flutter, with all the widgets, folder conventions, environment/IDE setup, and concepts, it takes longer.
I've programmed a lot in both Dart and C#, and my take is that Microsoft looked at Java and decided they needed a Java-like language, so they made C#. Google was into Java, but Oracle sued them over it, so they decided they needed a Java-like language and made Dart.
Edit: Dart is ridiculously easy to learn because it's very familiar so if someone complains about having to learn Dart it means they don't want to learn it
Microsoft's story isn't that different from the Google one. Microsoft had a Java language (https://en.wikipedia.org/wiki/Visual_J%2B%2B). Arguably, they had a better language designer in Anders Hejlsberg, as well as a lot of component-based development experience from COM, which enabled them to build a platform that was in many ways better. Things like WebForms were relatively stunted as a model for web development and would be replaced by MVC, aligning with what the world was mostly doing all along.
I don't have inside information here. My reading of the narrative is that Dart was created originally as an alternative to JavaScript, because JavaScript did (and does) have just tons of problems all around. It pivoted from being primarily a JavaScript alternative to supporting use-cases like Flutter.
If you're curious, this is an informative video of a debate between the designer of TypeScript and the designer of the Dart VM about which was the better approach.
Anders Hejlsberg and Lars Bak: TypeScript, JavaScript, and Dart
> my take is that Microsoft looked at Java and decided they needed a Java-like language, so they made C#.
Nope, they already had their own version of Java, J++, which is where Windows Forms was initially born, alongside their extensions like what would become P/Invoke in C#.
J++ was going to be the main language for Ext-VOS, the project that would eventually be known as .NET.
Then the lawsuit happened, COOL was born and eventually rebranded as C#.
Quite easy to know how C# happened when reading HOPL papers about its history.
In this regard, Microsoft was less lucky with the lawsuit than Google with their Android Java.
My version is meme-able because it's short, and that meme has won, since I hear it oft repeated. However, I appreciate the additional detail on how Microsoft got from Java to C#.
I've messed around with a few simple Flutter applications, and I basically wrote Java until the IDE complained, then made minor fixes. Honestly, I never had the feeling I was learning a new language; it felt more like moving from Python 2 to Python 3 than to a whole different language.
Honestly, as an ex-Googler (10 years), I have to say it's never that simple to explain why some tech is adopted or pushed at Google and it's never so top-down and unicausal.
Often it comes down to certain engineers having outsized voice and influence combined with them having a preferred hammer and then hunting around the organization looking for nails. IMHO that's what happened with Dart & Flutter.
I saw it happen in two PAs, and the second time it involved literally rewriting an entire shipped product on a platform I worked on. Slowly and tortuously, rewritten from a JS/HTML application to Dart/Flutter on "native" (i.e. not web or android or ios target), and only a few months after said product launched. To me this was classic Joel Spolsky "Things You Should Never Do, Part I" aka "rewrites considered harmful" alarm bells, but who listens to me, I'm just a grouchy old engineer.
One promise was that it would be faster (because "native"), but this was frankly based on a lot of untested assumptions and biases, and was never correct. V8 and the Chromium rendering engine have (low estimate) hundreds of thousands of human hours of optimization put into them. That's actually really hard to beat. Chromium has a pretty decent accelerated rendering stack, and boatloads of work went into optimizing it for smaller SoC-type devices (work often done by friends/coworkers of mine, actually).
And switching to Flutter meant having to jerry-rig (or worse, rewrite from scratch) basic things like accessibility (screen reader, magnification, etc.) or virtual keyboard that Chrome (or at least ChromeOS) had solutions for and we had just spent over a year making work for our product. Flutter delegates down to Android or iOS's implementation of these features, and as we were neither, we had no such thing. So it had to be built.
So the net result was either rewriting things that had already shipped or, more commonly, building awkward translation layers and bridges between the two platforms so that both things could run at once. This was a) a waste of engineering hours, b) a source of bugs, and c) happening instead of dealing with tech debt and improving or evolving the existing codebase.
The one upside was portability between Android and our product, so the same feature/app could be written for both... which I would accept as the only compelling reason to justify what was done.
I could go on, but I'd probably give away internal secrets or something, and probably piss someone influential off or burn some bridges.
My overall point being: test your assumptions and don't use some tech just because you a) like it or b) wrote it.
Also I don't really understand why Dart even exists. In the 21st century, there are very rarely serious problems to which "we need a new language" or "we need a new operating system" are the correct answer.
I do React (web, typescript) for my job, and Flutter for my hobby/side-projects. I chose Flutter specifically because Dart is far superior to javascript/typescript, and the fact that I do not have to deal with the absolutely bullshit JS build ecosystem.
Choosing React Native because you can find "Javascript developers" in 2022 is like choosing PHP in 2010 because it was popular.
If you're hiring above entry-level, you should expect a developer to be able to pick up a new framework or non-exotic language fairly quickly. And if you are hiring entry-level, you should expect to need to teach the new hire anyway.
Once in a while the Flutter/RN discussion shows up on Hackernews and I'd like to remind you all that Capacitor community is also alive and kicking.
For the past few months I've been building an e-commerce app with Capacitor/NextJS, and it's been quite a pleasant experience. If you're familiar with JS/TS and React, it's a great option for an e-commerce or B2B SaaS app.
If you don't need many native components or a super slick performance, Capacitor might be the cheapest and quickest solution for web developers.
We are building our native app off of our web app, using Capacitor. One single codebase, 3 platforms targeted, using very well established technologies with a very good DX.
I did native app development on and off from about 2010-2018, at larger apps. Today I see little reason to have 3 code bases with the same features; you get 3x the same problems. OK, there's Flutter and React Native, but their web code transpilation isn't great. Web views (both in browsers and in apps) are a major, mature technology with excellent developer tooling such as live reloading. Capacitor would always be my first choice today.
I'm surprised Flutter's performance compares favorably to React Native here. Last time I tried Flutter (around 1 year ago), even the simplest of scrolling demos noticeably stuttered and dropped frames on my phone, while React Native apps work fine. Was that just my experience?
I started writing Flutter apps at their RC1, and from then until last year (I haven't written Flutter since), Flutter had scrolling frame drops on both iOS and Android (high and low end). It was unrelated to the device, and it always happened when using the ListView builder. It may have been fixed now, but it was an issue for a long while. And it also plagued the web (Hummingbird back then).
I want to throw Ionic and Capacitor (or any other webview-based toolkit with your favourite SPA framework) into this mix; I believe for a lot of apps it's probably a better option than either Flutter or React Native. I also don't believe in "one true cross-platform toolkit"; each has its place. So:
Hiring - The OP suggests React Native has a larger pool of developers than Flutter, which is very true. But with Ionic/Capacitor/etc. you have access to an even larger pool of developers: any proficient front end web developer.
Sharing Code, Knowledge, and Developers - Again I believe Ionic/Capacitor/etc are even better than React Native here, you can potentially reuse even more of your code/views/logic. In fact your SPA web app version is your mobile app, except you then have access to native apis with Capacitor.
Developer Experience - Dev experience with Ionic/Capacitor/etc. is as good as React Native at this point. It has all the hot reload features we now expect from any SPA framework. You can even now use Vite for super speedy development.
Performance - A webview-based app on hardware less than five years old is on par with any React Native app. The performance of web views is not the issue; it's the underlying app code. Modern SPA frameworks and good engineering are all you need.
Unified UI Experience - Firstly, customers don't really care about a unified UI; they have been trained after 20 years of using web apps not to care. If it works, it works. Having said that, Ionic has done an incredible job of replicating the Material/iOS components.
Native Integrations - Capacitor has all your usual APIs covered; if you need more, it's as simple as with React Native to drop down to native with Swift or Kotlin. I'm a fan of using NativeScript with Capacitor; it gives you access to all native APIs straight from your JS codebase.
Internationalization - Standard SPA internationalization support, easy.
Built-In Navigation - Use whatever your SPA framework uses.
Web Support - Capacitor is the clear winner here as you have built a SPA.
Third-Party Libraries - You can use any web/javascript toolkit, along with any platform specific libraries for iOS or Android that you need.
Obviously it's not the right platform for all use cases, but if you are just starting out small with a tiny team, you only have to build your app once. 90% of your app is probably the front end; you don't want to build that twice until you have the traction and teams to do it.
The last time I developed in Ionic, I stumbled into problems when actually trying to DO things on the mobile platforms: Want to use the camera? You are limited to whatever "plugin" is available. Want to read from the SSD? You also have to integrate some plugin and pray that it works for your use case.
To me, it feels like Ionic is only good for the "lowest common denominator" of apps (a webpage, maaaybe with some storage), but once you stray a bit from that path, you struggle a lot with half-baked plugins and "black boxes".
Oh, and the documentation... both for Ionic (which? Cordova or Capacitor?) and for the plugins, it is SO bad. Also, Ionic is actually a for-profit endeavour, and you will pretty soon hit a wall that you can only remove by paying.
I don't think this is fair. In Capacitor/Ionic you have full native access, and adding a bit of native code to accomplish something when there isn't a plugin available is much easier now that we use Capacitor instead of Cordova. React Native and Flutter have the same dynamic: if you only want to use plugins for functionality, you have to find and use the ones the community created and hope they continue to create more and maintain them. We've made big strides there with the Capacitor Community effort, and we also directly support a number of plugins ourselves.
On the "for profit" side, I see this as an advantage. We try to only charge for things that are cloud and have natural cost, or that really only enterprises need. We deliberately keep the vast majority of our platform free and open source. The fact that we generate revenue from enterprise customers is actually awesome for our projects because we are directly incentivized to improve them!
We aren't merely an advertising-driven business that creates open source for other reasons (hiring, internal use, etc), but we are literally in the business of mobile technology. If you're a big company, you can actually call us or use one of our integrations that we directly support and you can file tickets against and have an SLA, and our cloud services integrate with your existing enterprise CI/CD infrastructure. Think about Facebook ever doing this! It's a unique offering in this space that our customers really love and directly benefits our OSS work.
Finally, on our docs, there is certainly room for improvement, but we often hear the opposite: that we have some of the best docs around for OSS projects!
Same experience. If you are using Ionic with any hardware sensors, including the camera, be ready for a rewrite down the line. Also, plugins (even commonly used ones) are full of bugs, and the toolchain for builds and cross-compilation is amateurish at best. Oh, and finally, if all of it does work, be ready to be surprised by how sluggish the application is going to be.
Yes, less useful. For example, apparently there's no Capacitor plugin to create a "foreground service" on Android. A search on Ionic's page takes you to the Cordova plugin.
There's an additional advantage, which is that you have a straightforward eject path from app stores.
If Apple or Google decides to boot you off the store for whatever reason, you're not out of business, you can distribute as a PWA and never really notice.
For many LOB apps, discoverability via the app store isn't relevant at all.
My primary app distribution mechanism is a SMS message with a link to a PWA that installs to home screen. Works everywhere, on every platform, which is a huge advantage for my senior population users.
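For context on the distribution path described above: a link-installable PWA generally needs an HTTPS origin, a web app manifest, and (in some browsers) a service worker. A minimal manifest sketch follows; the app name and icon paths are placeholders, not details from the original comment:

```json
{
  "name": "Example LOB App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Reference it from the page with `<link rel="manifest" href="/manifest.json">`, and most mobile browsers will offer the add-to-home-screen flow with no app store involved.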
I've developed some apps in Ionic/Cordova and Ionic/Capacitor. I agree that it's a decent system, especially for people (like me) who are just in a rush to implement something and aren't so caught up in platform wars.
I'm one of those people who couldn't care less about the language I use to code in - I just want to get stuff done. From my perspective, Javascript has some stupid stuff, but so does every other language.
My only significant objection to Ionic/Capacitor is the whole Node.js ecosystem. I hate getting dropped into dependency hell all the time. And it's significantly worse if you're stuck (as I am) supporting an end-of-lifed Cordova plugin for necessary functionality.
Now, in the real world, I'd have a team and we'd just build our own version of the plugin and get rid of the web of dependencies, but sadly I'm just one not-very-bright guy off on his own.
The few projects I was involved in with Cordova and Ionic don't leave good memories, given how slow everything was.
The only reason I got to use them was that I had no say in what they were trying to do; I was just a cog in the machine.
Both applications could have been done as PWAs, but they decided to push Angular into a Webview instead, taking ages to start and react to any screen interaction.
My primary hesitation with posts like this is that it mixes engineering points (performance, developer experience, code sharing) with a more general business point: hiring.
Hiring is a cost. There are costs to any decision. Opportunity cost, risk, business continuity, operational costs, etc. Hiring is generalizable so it makes for an easy example here.
I would hope that experienced folks making decisions like "React Native or Flutter" would weigh the full gamut of costs in their specific business context.
Your first and last paragraphs seem contradictory. You hope experienced folks weigh all costs, but a post mentioning the cost of hiring gives you hesitancy?
The fact that the post mentions hiring (cost) isn't my hesitation. It's that bringing cost into a conversation about the technical merits of a given technology can be a heavy-handed way of shutting down the conversation. In some cases, technical merits very much do justify the cost, but that can only happen by creating the space to find out.
Said differently: I believe it's often worthwhile to discuss the deeper technical merits of options without hastily jumping to a cost discussion.
I’ve never worked with anything that made me want to quit software entirely more than React Native, and didn’t feel that way at all about Flutter. Maybe it’s just me but I feel like the developer experience of Flutter is much more consistent.
I worked on a React Native project in 2019, and decided to exit mobile dev and focus just on web. My frustration at the time was the bonkers reliance on two package managers (npm and CocoaPods) and having many, many problems that had to be resolved in some quirky way in Xcode or Android Studio. I didn't like how much native code had to be copied and pasted around to integrate SDKs. At that stage you may as well bite the bullet and just learn native.
I had a very similar experience. It seems like they’ve just stacked too many different complex pieces of software on top of each other & the result is sometimes unpredictable and difficult to debug behavior that can pop up in any layer of that stack.
Add to that the ecosystem of libraries, which is virtually non-existent for Flutter.
Even Google (pioneer of Flutter) doesn’t have all their standard SDKs ported for Flutter.
We played around with Flutter for a total one day before ditching the idea altogether (in favour of native Android with Kotlin)
> We played around with Flutter for a total one day before ditching the idea altogether (in favour of native Android with Kotlin)
I've made several apps in Flutter and so far I'm actually happy with it. I look forward to the next project. I can easily understand why it's not for everyone, as it's true that not all standard SDKs have been ported.
And of course, not all projects are suited for hybrid solutions in any case.
Someone pointed this out to me on Twitter this morning and it's an aspect I didn't even think about. Most third-party SDKs have a JS version which generally (not always) works with React Native. Can't really say the same for Dart, so you're almost always doing a native integration or building it yourself.
I think it is all fine. People who seek personal and/or professional growth will learn new tech stacks and apply them in interesting ways. Staying on your current stack and treating technology as something that just pays the bills is also a good choice.
It is only the incessant talk trying to prove a past/present choice of technology "objectively better" that seems unnecessary to me.
This also reminds me of when the Sun Certified Java Developer certificate was the way to hire Java developers. With so many people getting the certificate, it got so bad that a Sun-certified developer was more likely to be worse, having put all their effort into getting the certificate rather than into really learning programming.
I develop/maintain 2x2 native apps (two different apps, with 2 different native versions each). When I started this years ago, the cross platform stories weren't as prevalent, and we needed really strong BLE integration (not just a casual shim library). So we did the native thing.
Reading various forums, it almost feels like no one does the iOS/Android native app dance anymore. Is that a vocal minority or is that what's really going on? Or are there still reasons to do native (Kotlin/Swift) apps?
For me, Flutter & Dart were badly needed solutions to the traditional way of developing Android apps. There's really no need to do the "native app dance" anymore, it's costly unless you really want to target a specific Android device.
Having said this, the article does bring up an important issue: in particular, it's tougher to find Dart developers, and you would most likely need to spend 6-8 months for a new team to get into the groove... BUT
React Native is a giant mess. It's clunky and slow, throwing JavaScript/TypeScript at everything has been problematic, and it's clear that its performance lags behind Flutter by a large factor.
If I had to choose, I would take the tool that offers the best development experience, and a large chunk of that comes from debugging. It just so happens Google hit the nail on the head: NullPointerException is one of THE biggest risks of using other toolsets.
So if I had to bet here, I would put it on Flutter/Dart. It is essentially a love child of ES6/Java/TypeScript, and it just hits so many pain points coming from React Native.
I really do think React will become the Java Swing of our generation: yes, you've done everything right, but it seems you need to keep up with new trends, and the usage of hooks is really annoying and unintuitive.
In fact, on the web front, I think Backend-as-Frontend, where we persist application state over a websocket, will be a game changer. On the mobile front, Flutter clearly scratches the itch for both newcomers and React Native developers, and backend-as-frontend also really gets rid of the need to maintain two separate code bases (one for React and the other for the backend).
I think a web app that loads via Backend-as-Frontend framework that Flutter talks to like a regular joe REST API will be the paradigm shift.
I would love to hear what others think of my views as I'm curious as to how my bets will pay off.
It's too bad there's not a way you can query apps and figure out what they're "made of". Then you could gather some real numbers on just what the makeup is of today's app offerings. It could be like a nutrition label. :)
The first is the claim that Flutter is one of the most popular native app frameworks. I haven't seen or heard of anyone actually using it; Flutter and Dart barely exist. After React Native, the next biggest would probably be just plain old native, or Electron.
The second is their claim that Google has a great developer experience. I have legitimately never used a Google product with a good developer experience; they are always painful compared to Facebook products.
The argument applies equally well to React vs Vue. Ultimately it doesn't really matter if you slightly prefer the ergonomics or API of one over the other. There is simply no argument whatsoever from a standpoint of community support, tooling, documentation, libraries, and hiring pool that React has a 10x advantage over Vue. If it were the other way around, I would just as happily use Vue. But it isn't, so I use React.
Perhaps if you are building an app optimized for high performance where it is needed (3D, trading apps, etc.), sure. But in 'all' cases? You would benefit more from the large ecosystem of officially 'supported' TypeScript libraries available in React Native, which most companies would simply use over unofficial ones, versus the very high likelihood of finding no equivalent libraries on the Flutter side, even if you are migrating a complex app.
Flutter is getting there, but the library ecosystem isn't there yet, especially for desktop. If it does get there, at least we should be able to see the first Flutter desktop apps like Rows [0] replace Electron on the desktop, unless somehow React Native for Desktop gets there, though it is even further behind.
It's a lot easier to catch a fish in a small pond if you're the only one fishing. It was a lot easier to hire a competent F# developer than it was to hire a competent C# developer.
No one ever got fired for buying IBM. Technology that is unknown to the buyer is significantly harder to sell. You will lose sales because of your exotic technology choice. If you told me this 4 years ago, I'd have argued that the implementation details don't matter but it turns out that they do.
I really enjoy the dev experience of Flutter. The tooling is great and Dart was easy to pick up with my experience with C#. The various batteries-included features are appreciated as well.
Unfortunately, the lack of "Code Push" killed several attempts of using Flutter on various projects for us.
ReactNative has been good to us but I really wish there was a bit more built-in for commonly needed features.
Maybe things have changed since I last used it, but the styling was very boilerplatey, and I didn't enjoy using it. I really liked Dart, though; it was easy to learn.
Is there some sort of template library / style system that y'all are using? Or is it still one widget inside another widget inside another widget..?
Because if it's the latter, then some syntactic sugar over that would very much be a game changer.
A comment on hiring and training: I have worked with React web, RN, and Flutter, and I can say the biggest part of training is always the platform/framework, not the language.
So would it be easier to train a Kotlin/Swift developer to learn a new mobile framework, or a JS/React developer to learn a new platform?
What I like about Flutter is that I can develop cross-platform without JavaScript. Even though Dart has some weird stuff, it's still better than TypeScript and JavaScript, but that's just my opinion.
I wish Flutter would die in a fire! The web version renders everything in a canvas. This sucks! It means no readable text, because there's no DOM for a11y tools to read; no standard context menus (so no OS/browser-level spell checking, word lookup, copy and paste, or password generation); and poor, slow, second-class support for non-Roman languages (it has to do its own font rendering, so it asynchronously downloads/rasterizes non-ASCII code points, which means new characters show up as placeholders and only appear 500-1000ms later). I'm sure I could go on and on about how bad that decision is.
If you're smart enough to learn Flutter, you're smart enough to learn Swift/Kotlin.
So, just do that.
The appeal of React Native is that you get to stay in the javascript ecosystem. If you're going to move away, move away to native.
It's a no brainer. The very premise of these cross platform tools is flawed, frankly - if you can't afford one dev per platform - your problem isn't tooling, it's money.
So, just fix the money issue.
Also, as an iOS dev - the second I see mention of Rx, Flutter, React Native or some other third party 'thing' - I don't even bother applying. Apple/Google produce enough new 'things' every year to keep everyone busy. I don't need some third party 'thing' on top of it all.
Hiring is certainly very important. But I disagree that JavaScript developer availability is the most important criterion in this decision. After all, we're talking about "quality developers", not script kiddies. Experienced developers will hit the ground running with Dart. It's just another C-style language, like many others that experienced developers will have used extensively.
What's very interesting about your comment is this:
Let's say that JavaScript was a language for "kiddies" in 2008. Let's call them people in the age group 17-22. That was fourteen years ago, and those "kiddies" are now 31-36.
In 2008, they were in entry-level positions. Today, they have over a decade of industry experience and are lead engineers, engineering managers, &c. It's not like they "grew up and switched to adult languages."
They brought JavaScript with them, and as they recognized its shortcomings... They fixed the language and its ecosystem.
(For some debatable definition of "fixed," but the general point is that even if it was a language for script kiddies a decade and a half ago, those kiddies grew up and to a certain extent, the language grew up with them.)
Exactly, add another decade and we're talking about the original Script Kiddies being seasoned executives, or possibly even having cashed out a startup or two and are now investing in startups via the Founder-to-VC pipeline.
Calling Dart a "C-style language" sounds very weird. The curly-brace syntax is the only parallel I can draw between the two... What other similarities do you see?
In a rant that was (at one time) legendary, the late Erik Naggum responded to the argument that XML was nothing more than another way to write s-exprs, and if you like s-exprs, the only quibble you have with XML is the syntax.
Erik wrote, "They are not identical. The aspects you are willing to ignore are more important than the aspects you are willing to accept." He went on to write a great deal more that I am not willing to quote, but at its core was this same argument:
Sometimes two things seem very similar, but the ways in which they are similar are unimportant, and the ways in which they differ are profound.
My guess is that little aphorisms like this will come into and out of fashion over and over again, because this argument is one we will have over and over again.