Hacker News
The perceived speed of computers (datagubbe.se)
43 points by doener 4 months ago | 25 comments



My one-bit explanation for why computers feel so slow these days is that everything is now a web app.

The instant you introduce network calls that touch remote boxes, you have added a consciously noticeable amount of lag to the operation. Yes, there are also programs that feel slow even when doing things entirely locally, but that's a different ballgame.

Why is everything a web app these days? Many reasons. One that doesn't get enough airplay is that it's simply much harder to pirate a web app. This is not the dominant explanation, though. The real one-bit answer is that non-technical end users don't have to install anything to do what they want with a web app, and that massively improves one's sales funnel.

A decade-plus of being able to get paid for web apps much more reliably than for local desktop applications has increased the supply of web developers and shrunk the supply of desktop GUI developers. Eventually we have even seen web-app tech retrofitted back into the traditional desktop application market, because that's what the economics suggest is the cost-effective approach. The whole churn is fascinating to watch play out.


Web apps would be fine, except that most "web" developers are actually JavaScript front-end developers. They outnumber the older generation of C++/Java/C# developers by an order of magnitude. They have no idea what "fast" looks like, in any sense. They've never experienced it, never created it, and wouldn't know where to begin if asked. These are people who think that GraphQL is what optimisation looks like.

Compounding this is the current microservices fad, combined with dockerisation of everything on top of virtualisation on top of layers of cloud virtualisation. Then someone thought: not enough layers, let's add sidecars!

Here's a simple lesson: The typical web app is single-threaded for one user, even if it is asynchronous and has multiple servers and layers. That means that at every instant in time, at most one core is "lit up" and doing useful work for that user. While data is "on the wire" between servers, zero cores are doing useful work.

Effectively, modern web apps provide some fraction of one core of computer power for each user. The more network hops, the more load balancers, firewalls, proxies, etc... there are in the path, the smaller this fraction is, eventually approaching zero.
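To make the arithmetic concrete, here's a toy TypeScript sketch (hostnames and endpoints are made up, not any real service): each awaited hop adds its full latency to the one user's wait, and while a response is in flight no core anywhere is working for them.

    // Toy model of one user's request passing through serial hops.
    // Hostnames are hypothetical; the shape is what matters.
    async function handleRequest(userId: string): Promise<string> {
      // While each `await` is in flight, zero cores are working for this user.
      const session = await fetch(`https://auth.internal/session/${userId}`);   // hop 1
      const profile = await fetch(`https://profiles.internal/users/${userId}`); // hop 2
      const feed    = await fetch(`https://feed.internal/feeds/${userId}`);     // hop 3
      // User-visible latency is roughly the SUM of the hop latencies:
      // three 50 ms hops cost 150 ms before any rendering even starts.
      return JSON.stringify({
        session: await session.json(),
        profile: await profile.json(),
        feed: await feed.json(),
      });
    }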

No amount of "scale" will make the experience of ONE user better.

This just hasn't sunk in. People look to FAANGs for best-practices guidance, but FAANGs care about billion-user scalability, which is a highly specialized problem. Their solutions are all about scaling out, and almost never about making one user's experience "scale up".


I think that's part of it. But there's also a lot of badly designed software around. On the web it's not just the network: it's bloated, easy-to-misuse frameworks that cause every small operation to take millions of cycles. In the backend I've seen all sorts of "slow", from taking hours to aggregate millions of records (where you should be able to do a billion in less than a second), to things slowed down by the choice of wrong technologies or languages, to the ridiculous overhead of wrangling data in and out of various serialized formats and passing it around with heavyweight stacks.

What drives me crazy ;) is that many software developers have no sense of what a "reasonable" amount of time/resources for a given workload is. If they can't tell, or have never seen anything high-performance, they just think this is normal. No sense of how many operations per second a CPU can perform, of memory and disk performance, etc. Slow performance just gets normalized within a team.
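A ten-line benchmark is enough to calibrate that intuition. A minimal sketch (numbers are ballpark and hardware-dependent; runs under Node 18+ or Deno, where `performance` is a global):

    // How many simple operations per second can one core actually do?
    const N = 100_000_000;            // 10^8 elements, ~400 MB as Int32Array
    const data = new Int32Array(N);
    for (let i = 0; i < N; i++) data[i] = i % 7;

    const t0 = performance.now();
    let sum = 0;
    for (let i = 0; i < N; i++) sum += data[i];
    const ms = performance.now() - t0;

    // A modern core typically sums 10^8 int32s in well under a second,
    // which is why "hours to aggregate millions of records" should ring alarms.
    console.log(`sum=${sum}, ${ms.toFixed(0)} ms, ${(N / (ms / 1000) / 1e6).toFixed(0)} M ops/s`);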


I work on a platform team at a SaaS company, and one of our core drivers is holistic application performance. However, we've heard everything from "it's fast enough" to citations of what I would consider wrong-headed React docs as pushback against suggestions for improving performance. The one that really gets me, though, is "we'll do another pass, but we have to get this out now." Of course, that second pass never actually materializes.

Right now, our TTFB is almost a full second.
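If you want to check your own number, the browser's standard Navigation Timing API exposes it, e.g. from the devtools console:

    // responseStart is milliseconds from navigation start to the first
    // response byte, i.e. TTFB.
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    console.log(`TTFB: ${nav.responseStart.toFixed(0)} ms`);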


Counterpoint: both Xcode and Visual Studio are much slower for certain things than Visual Studio Code (most notably: startup time).

Application performance is less a technical problem than an organizational and cultural problem (at least if performance has become so bad that it starts to be annoying - e.g. I'm sure at some point in the past both Xcode and VS started much faster than now, and on much slower computers).

If the team cares about performance, they will find ways to improve it no matter the technology they use; they "just" need to make time for performance investigation and optimization. There's usually a lot of low-hanging fruit to pick before switching to a more efficient tech stack would enable further improvements (and at that point you're probably deep in diminishing-returns territory).


> that everything is now a web app.

Notepad is not. The new version in Win 10 needs 3 to 5 seconds to start.

Calc.exe is also not a web app. It needs 3 to 7 seconds to start.

Today it is normal to see black regions in a local program, as if drawing on the screen were a hard, unsolved problem.

All that matters is profit.


Everything is a web app these days because many developers are incompetent and don’t know how to do anything else.

And that’s only going to get worse as Generative AI comes along to “help” people do development.


I spent a lot of time in the early 90s looking at building faster graphics hardware, and started out by grabbing a lot of data.

The #1 thing we figured out is that 'raw performance' and 'perceived performance' are two very different things - we were selling raw performance, customers were buying perceived performance. Perceived performance is sort of an S-shaped curve: on the left there's a flat area where it's too slow - increase it a bit and it's still too slow; in the middle there's an area where performance changes are very noticeable; and on the right there's another flat area where it's fast enough.

Getting to that middle bit first was where we made our money; once we hit the rightmost bit, we were mostly waiting for the competition to catch up.

Anyway, the real point I'm trying to make here is that what you're aiming for is that 'fast enough' spot, not 'fastest'. For everything, it's worth spending time benchmarking to figure out what is slow (and watching someone else use it is time well spent).


Yes, I totally agree. Raw vs perceived aka deceiving performance.

My advice: work with an old Apple computer and pit it against a brand-new, super-giga-overclocked Windows hardware monster. Apple focuses on perceived performance first and foremost. On all my devices - MacBook Pro 13 from 2013, MacBook Air 2018, iPhone 6 - one thing is always smooth no matter what: UI interaction and transitions.

I read a lot about Apple and Google engineering, and I remember that they put a priority on perceived performance first. I think there is a 300 ms threshold below which optimization brings no additional benefit, but above which humans perceive animations as jerky, which leads to real discomfort.

AFAIK Jobs demanded a one-or-two-second rule for Safari: no new feature was allowed to slow down launch time.

Displays and chip design are also aimed at this perceived-performance metric.

Windows always appears jerky and laggy to me, no matter the age or raw performance of the machine, while any Apple device is guaranteed to feature smooth UI interaction no matter the CPU load.

I love it, and as a product manager I've made it a mantra for my enterprise SaaS platform as well. Customers love it; they feel taken care of. And that's great.


FWIW, we were building some of the first Mac graphics accelerators when we were doing the above work. 24-bit graphics were new, and too slow to be usable on the original Mac IIs (that too-slow plateau) - in part because the software guys were pissing the performance away. Microsoft in particular did stupid stuff - sometimes Excel would erase a window 7-8 times before it ever drew any useful pixels.


For direct user interactions (such as opening a combo box) the threshold of perception is actually much closer to 30ms than 300ms. 300ms is usable but far from a pleasure to use.
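If you want to measure where a UI sits on that scale yourself, one crude probe is to time from the input event to the frame after the handler finishes (a double requestAnimationFrame approximates the next paint). A sketch:

    // Rough interaction-latency probe: log input-to-paint time for clicks.
    document.addEventListener("click", () => {
      const t0 = performance.now();
      requestAnimationFrame(() =>
        requestAnimationFrame(() =>
          console.log(`input-to-paint: ${(performance.now() - t0).toFixed(1)} ms`)
        )
      );
    });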

I found myself interacting with a Windows Server 2008 VM the other day (don't ask!) and was astonished how snappy the UI was, relative to today's machines (Windows or Mac).


This is such a common frustration for me that I sometimes share it with family, friends, and my partner. They have a hard time understanding why I think it is such a big problem.

I think people have perhaps gotten too good at accepting the gruel they get, rather than the feast they could have. My partner tries to get me to be less frustrated all the time. She is right, of course, that this is not a helpful emotion. But I cannot concede and just accept gruel like this. Ordering plane tickets is something I dread every time.

Anyone have good ways of keeping the energy to improve things without letting it get to you?


I firmly believe there is no good reason for a UI running on modern hardware to ever feel slow or unresponsive. The application should ensure that any heavy processing is done on a separate thread that doesn’t impact the UI’s ability to respond to user interaction. “Heavy processing” is anything that could take more than 100ms, which includes most network requests. Give the user feedback that the processing is happening, but don’t let that interfere with basic UI operations like navigation, or moving the window around, or exiting the application.
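As a minimal sketch of that pattern in the browser, assuming a bundler that supports module workers (element IDs and file names are made up):

    // main.ts: the UI thread only dispatches work and paints feedback.
    const worker = new Worker(new URL("./heavy.ts", import.meta.url), { type: "module" });
    const status = document.querySelector<HTMLElement>("#status")!;

    document.querySelector("#crunch")!.addEventListener("click", () => {
      status.textContent = "working…";          // immediate feedback; UI stays responsive
      worker.postMessage({ n: 1_000_000_000 }); // heavy loop runs off-thread
    });
    worker.onmessage = (e: MessageEvent<{ result: number }>) => {
      status.textContent = `done: ${e.data.result}`;
    };

    // heavy.ts: the worker owns anything that could take more than ~100 ms.
    onmessage = (e: MessageEvent<{ n: number }>) => {
      let acc = 0;
      for (let i = 0; i < e.data.n; i++) acc += i % 3; // stand-in for real work
      postMessage({ result: acc });
    };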

This applies to local apps and web apps alike. No excuses!


My most annoying daily slowness is opening Microsoft To Do on my iPhone. Five seconds to first render of the todo list. Why? It still hasn't synced! (And the sync isn't transparently communicated, another big flaw)
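The old fix for this is render-then-sync: paint the cached list immediately, reconcile in the background, and say so. A sketch (the storage and render helpers are hypothetical stand-ins, not any real API):

    type Todo = { id: string; title: string; done: boolean };

    // Hypothetical helpers standing in for the app's real cache/network/UI code.
    declare function loadFromLocalStore(): Promise<Todo[]>;
    declare function fetchFromServer(): Promise<Todo[]>;
    declare function saveToLocalStore(todos: Todo[]): Promise<void>;
    declare function renderList(todos: Todo[], opts: { banner: string | null }): void;

    // Never block first paint on the network.
    async function openTodoList(): Promise<void> {
      const cached = await loadFromLocalStore();
      renderList(cached, { banner: "Syncing…" });   // first paint in tens of ms
      try {
        const fresh = await fetchFromServer();
        await saveToLocalStore(fresh);
        renderList(fresh, { banner: null });        // reconcile once sync lands
      } catch {
        renderList(cached, { banner: "Offline, showing cached list" });
      }
    }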


Don't get me started on Office apps. Even when you've selected "work offline" for certain files, the load time is brutally high.

MS really hates their users, I sometimes think to myself, in the sense that they don't get how much time you lose every day opening an app and waiting for a simple spreadsheet to pop up, when people would rave if only this were a damn fast process.


And Atlassian ones.

I mean, we have a guy on our team who is really into scrum and is always chasing us to update, create, or move our tickets to the correct queue/status/whatever. As engineers, we are so tired of waiting for Jira to respond that we usually give up midway after clicking somewhere, go back to our tasks, then forget about that tab and never complete what we meant to update in Jira.


There are good JIRA installations, and bad ones. Good ones are speedy and responsive, even if you have a large amount of data you’re working with, and there are lots of tickets in the system.

I was on the team that managed one of the JIRA installations at Whole Foods, before the company got bought by Amazon. I remember what a good JIRA installation looks like.


The trick is to automate that stuff, imo. Having devs push buttons in Jira is stupid.
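For example, a CI step can move the ticket when a branch merges. A sketch against Jira Cloud's standard transitions endpoint (the domain, env var names, and transition ID are assumptions; transition IDs vary per instance):

    // Transition a Jira issue from CI instead of having devs click around.
    async function moveTicket(issueKey: string, transitionId: string): Promise<void> {
      const auth = Buffer.from(
        `${process.env.JIRA_EMAIL}:${process.env.JIRA_API_TOKEN}`
      ).toString("base64");
      const res = await fetch(
        `https://yourcompany.atlassian.net/rest/api/3/issue/${issueKey}/transitions`,
        {
          method: "POST",
          headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
          body: JSON.stringify({ transition: { id: transitionId } }),
        }
      );
      if (!res.ok) throw new Error(`Jira transition failed: ${res.status}`);
    }

    // e.g. in a merge pipeline: await moveTicket("PROJ-123", "31"); // "31" = Done here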


Tesla App on my old Android phone beats your Microsoft To Do ;)


Desktop apps should be C++/Qt, not JavaScript.


If the team doesn't care about performance, they will just as easily create a low-performing application in C++/Qt as they would with JavaScript/Electron. The problem isn't mainly technical, but cultural/organizational (for instance, chasing new features instead of improving existing code).


Strongly disagree.

Apples to apples, C++/Qt will always beat the shit out of JS/Electron.

The real problem here is that computers HAVE gotten too fast, so lazy shit like Electron is acceptable as 'good enough'.


Dupe of https://news.ycombinator.com/item?id=40604972 from 7 days ago.

Perhaps the poster's perceived internet speed is also slow! :D


In German, we have the word Schwuppdizität (perhaps "snappiness") to describe the perceived operating speed of a computer. Very important to distinguish from actual FLOPS numbers or similar measures.


For almost any task, you can choose something that's easy to install (maybe a web app!) and has lots of features but is slow, or you can usually opt for something extremely snappy and bare-bones. Software is all about tradeoffs, but there is choice.



