Hacker News | Bewelge's comments

Can you elaborate on "resistance against cuda"? What were people clinging to instead?

IMO it was mostly that people didn't want to rewrite (and maintain) their code for a new proprietary programming model they were unfamiliar with. People also didn't want to invest in hardware that could only run code written in CUDA.

Lots of people wanted (and Intel tried to sell, somewhat successfully) something they could just plug in and run the parallel implementations they'd already written for supercomputers using x86. It seemed easier. Why invest all of this effort into CUDA when Intel was going to come along and make your current code run just as fast as this strange CUDA stuff in a year or two?

Deep learning is quite different from the earlier uses of CUDA. Those use cases were often massive, often old, FORTRAN programs where, to get things running well, you had to write many separate kernels targeting each bit. And everything had to stay on the GPU to avoid expensive copies between GPU and CPU, and early CUDA was a lot less programmable than it is now, with huge performance penalties for relatively small "mistakes". Also, many of your key contributors were scientists rather than professional programmers, who see programming as getting in the way of doing what they actually want to do. They don't want to spend time completely rewriting their applications and optimizing CUDA kernels; they want to keep on with their incremental modifications to existing codebases.

Then deep learning came along and researchers were already using frameworks (Lua Torch, Caffe, Theano). The framework authors only had to support the few operations required to get Convnets working very fast on GPUs, and it was minimal effort for researchers to run. It grew a lot from there, but going from "nothing" to "most people can run their Convnet research" on GPUs was much easier for these frameworks than it was for any large traditional HPC scientific application.


Thanks!

It seems funny though: The advantages of GPGPU are so obvious and unambiguous compared to AI. But then again, with every new technology you probably also had management pushing to use technology_a for <enter something inappropriate for technology_a>.

Like in a few decades when the way we work with AI has matured and become completely normal it might be hard to imagine why people nowadays questioned its use. But they won't know about the million stupid uses of AI we're confronted with every day :)


> The advantages of GPGPU are so obvious and unambiguous

I remember being a bit surprised when I started reading about GPUs being tasked with processes that weren't what we'd previously understood to be their role (way before I heard of CUDA). For some reason that I don't recall, I was thinking about that moment in tech just the other day.

It wasn't always obvious that the earth rotated around the sun. Or that using a mouse would be a standard for computing. Knowledge is built. We're pretty lucky to stand atop the giants who came before us.

I didn't know about CUDA until however many years ago. Definitely didn't know how early it began. Definitely didn't know there was pushback when it was introduced. Interesting stuff.


I'm dealing with someone in 2026 insisting that everything has to be written in Python and rely entirely on torch.compile for acceleration rather than any bespoke GPU kernels. Times change, people don't.

The completely low-information, amateur-hour aspect of what our HPC Welfare Queens were pushing above was that a couple of hours invested in coding Intel's Xeon Phi alternative to GPUs demonstrated the folly of their BS "recompile and run" strategy, and any attempt to code the thing exposed how much better a design CUDA was than the series of APIs of the Month that followed*. And I was all but blacklisted by the HPC community for standing up to this and insisting on CUDA or I walk. My favorite quote was "You lack vision and you probably wouldn't have backed the Apollo program or Lewis and Clark." Good times, good times...

*But TBF Xeon Phi was not a complete disaster: if you coded it in assembler, you could squeeze out Fermi-class GPU performance. Good luck getting the "recompile and run" crowd to do that, though, as they segued from that to relying on compiler directives going forward, and that's how NVDA got a decade-plus head start that should never have happened, but did. Today a lot of these sorts are insisting that, because of autograd, everything should be written in Python and compiled with an autograd DSL like torch. I am so glad I am close to retirement on that front. I already trust coding agents more than I trust this mindset.


Phi was cool, I think it could have been leveraged into something great. Imagine all consumer CPUs coming with 512 little pentiums in them or something like that.

And it was ahead of GPUs in some ways at the time. But that was entirely squandered by their idiotic recompile-and-run marketing. There was some serious denial that thread blocks that could synchronize without thunking back to the CPU, along with the intuitive nature of warp programming, were pretty much a hardware moat against anything that couldn't do the equivalent.

But good luck explaining that to technical leaders who hadn't written a line of code in over a decade and yet somehow were in charge of things. People really need to consider the backstory here if they want to do better going forward, but I don't think they will. I think history is going to rhyme again.


In the beginning, valid claims of 100x to 1,000x speedups for genuine workloads, due to HW-level advances enabled by CUDA, were denied on the grounds that they ignored CPU and memory-copy overhead, or that performance was only being measured relative to single-core code, etc. No amount of evidence to the contrary was sufficient for a lot of people who should have known better. And even if they believed the speedups, they were the same ones saying Intel would destroy them with their roadmap. I was there. I rolled my eyes every single time, but then AI happened and most of them (but not all of them) denied ever spouting such gibberish.

Won't name names anymore, it really doesn't matter. But I feel the same way about people still characterizing LLMs as stochastic parrots and glorified autocomplete as I feel about certain CPU luminaries (won't name names) continuing to state that GPUs are bad because they were designed for gaming. Neither sort is keeping up with how fast things change.


I also find these things incredibly annoying. But I've been actively working in webdev the past couple of years so I was actually keeping up with stuff. And I still consider this a cheat-code.

It makes it so easy to cut through the bullshit. And I've never considered myself scared of asking "stupid" questions. But after using these AI tools I've noticed that there are actually quite a few cases where I wouldn't ask (another human) a question.

Two examples:

- What the hell does React mean when they say "rendering"? Doesn't it just output HTML/a DOM tree and the browser does the actual rendering? Why do they call it rendering?

- Why are the three vectors in transformer models named query, key & value? It doesn't really make sense, why do they call it that?

In both cases it turns out, the question wasn't really that stupid. But they're not the kind of question I'd have turned to Stackoverflow for.
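For anyone else wondering about the second question: the names come from treating attention as a "soft" dictionary lookup, where every key matches the query a little instead of one key matching exactly. A rough numpy sketch of that intuition (the function and variable names here are my own, not from any framework):

```python
import numpy as np

def soft_lookup(query, keys, values):
    # Compare the query against every stored key, turn the match
    # scores into weights with a softmax, and return a weighted
    # blend of the values instead of a single exact hit.
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()        # weights now sum to 1
    return weights @ values

keys = np.array([[1.0, 0.0], [0.0, 1.0]])  # two stored "keys"
values = np.array([10.0, 20.0])            # their associated "values"
query = np.array([1.0, 0.0])               # resembles the first key

print(soft_lookup(query, keys, values))    # between 10 and 20, closer to 10
```

In a real transformer, queries, keys, and values are all learned projections of the same token embeddings, but that lookup analogy is where the names come from.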

It really is a bit like having a non-human quasi-expert on most topics at your fingertips.


Just a guess, but to me it sounds like you're trying to do too much at once. When trying something like this:

> lots of careful code of registering and unregistering callbacks, some careful thread synchronisation (callbacks get called in another thread), thinking about sane exception handling in async code. Fiddly but not rocket science.

I'd expect CC to fail this when just given requirements. The way I use it is to explicitly tell it things like: "Make sure to do Y when callback X gets fired" and not "you have to be careful about thread synchronisation". "Do X, so that Exceptions are always thrown when Y happens" instead of "Make sure to implement sane Exception handling". I think you have to get a feeling for how explicit you have to get because it definitely can figure out some complexity by itself.

But honestly, it also requires a different way of thinking and working. It reminds me of my dad reminiscing that the skill of dictating isn't used at all anymore nowadays. Since computers, typing (or more specifically, correcting what has been typed) has become cheap, and the skill of being able to formulate a sentence "on the first try" is less valuable. I see some (inverse) parallel to working with AI vs. writing the code yourself. When coding yourself, you don't have to explicitly formulate everything you are doing. Even if you are writing code with great documentation, there's no way it could contain all of the tacit knowledge you as the author have. At least that's how I feel working with it. I only really got started with Claude Code 2 months ago, and for a greenfield project I am amazed how much I could get done. For existing, sometimes messy side projects it works a lot worse. But that's also because it's more difficult to describe explicitly what you want.


> The way I use it is to explicitly tell it things like: "Make sure to do Y when callback X gets fired" and not "you have to be careful about thread synchronisation". "Do X, so that Exceptions are always thrown when Y happens" instead of "Make sure to implement sane Exception handling".

At this point I'm basically programming in English, no? Trying to squeeze exact instructions into an inherently ambiguous representation. I might as well write code at this point, if this is the level of detail required. For this to work, I need to be able to say "make this thread-safe", maybe "by using a queue". Not explaining which synchronisation primitive to use in every last piece of the code.

This is my point actually. If I describe the task to accuracy level X, it still doesn't seem to work. To make it work, perhaps I need to describe it to level Y>X, but that for now takes me more time than to do it myself.
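To make the comparison concrete: the queue version of "make this thread-safe" is small enough that the English description and the code end up about the same length, which is kind of the point. A rough Python sketch of the pattern I mean (all names are mine, assuming some library fires callbacks on its own worker thread):

```python
import queue
import threading

# The callback (fired on a library's worker thread) only enqueues;
# the owning thread drains the queue, so all real handling stays
# single-threaded and needs no locks.
events = queue.Queue()

def on_event(payload):          # may be called from any thread
    events.put(payload)         # Queue.put is thread-safe

def drain():                    # called on the owning thread
    handled = []
    while True:
        try:
            handled.append(events.get_nowait())
        except queue.Empty:
            return handled

worker = threading.Thread(target=on_event, args=("tick",))
worker.start()
worker.join()
print(drain())                  # ['tick']
```

Whether spelling that out in English to an agent is faster than just typing it is exactly the trade-off I mean.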

There's lots of variables here: how fast I am at writing code or planning structure, how close to spec the thing needs to be, etc. My first "vibe code" was a personal productivity app in Claude Code, in Flutter (task timing). I have zero idea about Dart or Flutter or any web stuff, and yet it made a complete app that did some stuff, worked on my phone, with a nice GUI, all from just a spec. From scratch, it would take me weeks.

...though in the end, even after 3 attempts, the final thing still didn't actually work well enough to be useful. The timer would sometimes get stuck or crash back down to 0, and froze when the app was minimised.


I'm reminded of the notion of "Kolmogorov complexity" here. There might be some tasks for which a short natural-language description is sufficient, and others for which a sufficiently formal description is needed, to the point that it's easier to actually write the code than describe it in English.

> At this point I'm basically programming in English, no?

Yea, except they can handle some degree of complexity. Its usefulness obviously really depends on that degree. And I'm sure there are still a lot of domains and types of software where that tradeoff between doing it yourself or spelling it out isn't worth it.


I'm not sure I agree with your analysis.

It's true that branding gains importance as a differentiator when the product moves towards being a commodity or when its innovation has reached its peak. But I disagree that it's always the driving factor for how a company is structured.

There are some industries where the franchise model is dominant (e.g. fast-food chains) and others where tight control over the value chain is the norm (e.g. Apple). And funnily enough, one of the distinctive differentiators those luxury Swiss watchmakers have is: "We make every part ourselves. Everything is manufactured in Switzerland." Then you have the OG multinational companies like P&G or Unilever, where branding definitely plays an important role, but a lot of their brands are regional, so it's yet another structure.

I really think it's mainly the specific industry and geopolitics that shape supply chains and not the branding.

> I'm speculating now, but it seems really likely if that type of distributed supply chain didn't exist, we'd see a closer coupling of design and manufacturing (at an extreme end factories designing and producing their own product).

It's an interesting question. I think it's safe to say that the main driver to outsource manufacturing was cost. But nowadays companies also benefit from things like reduced accountability. Even if we assume every part of the supply chain could be done within the US, wouldn't companies like Apple still eventually outsource the ugly parts of the supply chain to some third party within the US? Simply for the appearances.


> That’s exactly my point. If a problem exists or a solution to a problem is possible about which no party is willing to talk about, it will not show it. The choice of topics is based on party opinions, not on voter surveys. This is called bias.

It's not a bias; I think you're misunderstanding the meaning of the word. We vote for parties, and the Wahl-o-Mat compares the opinions of those parties. It would be biased if it gave an unfair advantage or disadvantage to one or some parties. The parties' opinions not being a good representation of the important challenges we face is not a bias of the Wahl-o-Mat. You can question its usefulness, but calling it biased is calling it dishonest.

And honestly? That sucks. You can question its usefulness, but please don't call an initiative that aims to give the populace a better political overview biased for no reason. You're not helping shape political discourse for the better here. I just checked: there weren't any Google results for "wahl-o-mat bias" yet. Now there are.


It gives an unfair advantage to incumbent parties by shaping the political agenda and manipulating public opinion. When you look at the housing crisis in big cities and what the Wahl-O-Mat displays as the options on the table based on political programs, it is very easy to think that the selection of options is exhaustive. The public is effectively pushed into discussing only those options, none of which is a good solution. Yet solutions exist; the only problem is that the political center is dead, and they do not fit into populist-right or populist-green/left platforms. At least half of the items on the Wahl-O-Mat are feel-good populism scoring points for one or another party, not least because scoring points, not real change, is for most of them the primary objective. And the tool simply reflects that, because the German political system is designed to stabilize the status quo, not to challenge it.

Edit: the word "bias" may not show up in Google searches, but this topic is certainly discussed in the German-language space and is easy to find:

https://www.zeit.de/kultur/2025-02/peter-mueller-bundesverfa...


I did do a German search as well. And bias in that case would be something like "parteiisch" or "bevorteilen/benachteiligen". In 2019 a court did say it gives an advantage to incumbents because you had to limit your results to 8 parties. They've changed that since. The article you've linked is talking about its usefulness in general, not about it giving unfair advantages to anyone.

Honestly, I'm not a big fan of the thing myself, and I agree that most of your proposals would improve the tool. But calling a tool that compares political parties "biased" makes it sound deceptive. And I don't believe that it's deceptive. It's just ineffective.

The fact that it gives an "unfair advantage to incumbents" also isn't really a bias of the tool. It really just is the most obvious fair way to structure it. Large parties are large because a large amount of people vote for them. So they obviously have more visibility.

But I also agree that the majority rule of an ageing population in Germany is problematic. With most incumbents seemingly agreeing to not talk about the future.


Bias: A preference or an inclination, especially one that inhibits impartial judgment.

Let’s say there are parties A, B and C. A and B offer populist solutions Sa and Sb to a very important problem X; C does not promise anything on it, but offers a solution for a less important problem Y (single-issue party). If a voter thinks that Sa and Sb are the only choices, they may choose A or B, because it feels right. If the voter was informed that neither Sa nor Sb solves the problem, but some other solution Sx not supported by any party would do it, party C could become a reasonable choice - at least you get something done this way. This would be impartial judgement. The Wahl-O-Mat inhibits this scenario, so yes, it is biased, it is deceptive.


That's not true. I just now went through the Wahl-o-Mat for Baden-Württemberg (because there are elections there this Sunday, March 8). Of the 38 questions, there are two that directly address the housing crisis:

> The rent cap (Mietpreisbremse) in Baden-Württemberg's cities and municipalities should be abolished

> Rental apartments left empty for a longer period (Leerstand) should be consistently taken from their owners and rented out.

Why do you think those are not good options to address the housing crisis? Are there other levers? Of course, but the Mietpreisbremse (rent cap) and Leerstand (vacant housing) are certainly key levers. And the point of the Wahl-o-Mat is to give you an indication of overlap, not to list single political initiatives and let you decide on them.


> That's not true

What exactly is not true?

> Are there other levers? Of course, but Mietpreisbremse (freezing rents) and Leerstand (empty houses) are certainly key levers.

They are not the key levers, which anyone with basic knowledge of economics should understand immediately.

Regardless of whether a certain segment is operated by the market, by regulation, or by a combination of the two, it is always about supply matching demand. If you have more demand than supply, either prices will increase or you enter a state of deficit, where non-market mechanisms decide who gets a home and who does not. In either case, a prolonged state of reduced supply leads to crisis. Neither of those levers reliably solves the problem of supply. There are simply not enough empty homes on the market to consider efficient management of them a solution. Rent controls also do not help here, because they actually reduce supply, by creating a black market of subleases and reducing mobility (people stick to their regulated contracts).

The solution to the problem requires multiple reforms at once, which will fix short term and long term supply:

1. Dramatically deregulating supply side and reducing NIMBY influence. Project approvals should not take 7 years as in Berlin Friedrichshain and should not be blocked by survival of some rare toads like in Berlin Pankow.

2. Streamlining microdistrict planning, where one project includes construction of thousands of homes along with the necessary infrastructure. May include rapid construction tech (Plattenbau 2.0). It is not enough to build one building here and there, we need to build a lot more.

3. Shifting from a majority-rental to a majority-ownership model in which mobility is supported by the market, low commissions (remove notaries, reduce taxes) and fast registration of property rights. It must be possible to sell and buy in 2-3 weeks maximum. The large landlords should be re-privatized: first, the state takes over, then rentals are converted into subsidized lease deals with tenants with transferable rights, in which tenants will effectively own their homes while continuing to pay the current, never-increasing amounts.

4. A progressive tax on property ownership, kicking in from 0% on 3-5 apartments up to a significant percentage of rent value on 50+ apartments. The for-profit landlord business model must become unattractive, while not putting too much pressure on small landlords, which are more market-friendly.

This kind of reform is neither left nor right. It’s anti-capitalist, because it prevents concentration of capital and negative redistribution, but it is also pro-market, because it hands over ownership to millions of people who can then freely sell and buy for their own use.


So now the value is created through curation. Before it was inherent at creation. If you never curate it might seem like it lost value in comparison.


Curation was implicit when the cost of image creation was high and authors had to consider the photos they were taking beforehand. Now curation comes afterward.


In my childhood, slide shows were very deliberately curated, in no small part because the presentation of the slides was a relatively elaborate, shared family event.


But curation was done mainly by the creators, who were the people able to do the creation in the first place (professional photographers, people who could afford to buy the expensive camera, people who could afford the software for editing photos/slideshows en masse, etc.). Now everyone can curate, and consumers can actually pick which curated collection is truly the best.


But what does 'best' even mean in this context? A photographer sharing their 'best' photos was some combination of sharing their personal perspective and their effort to capture shared memories on behalf of others. So yeah it was a limited/privileged (often patriarchal) role. What they picked was interpretive, but that curation was part of the expression/information the viewer was experiencing.

We can mix and match the media we choose to view or keep so easily, when previously there was so much more material and opportunity cost to choosing what to shoot, develop, keep, and share. I think that inevitably loses some meaning.


I'm 99% sure that ping pong match is CGI. The whole robot has this green screen effect. Look at its feet. And at second 17 it just disappears entirely for a few frames.


> But the value of low quality communication is not zero: it is actively harmful, because it eats your time.

But a non-zero cost of communication can obviously also have negative effects. It's interesting to think about where the sweet spot would be. But it's probably very context specific. I'm okay with close people engaging in "low quality" communication with me. I'd love, on the other hand, if politicians would stop communicating via Twitter.


The idea is that sustained and recurring communication would be essentially free, while establishing a new line of communication would have a slight cost that quickly drops to zero.

A poorly-thought-out hypothetical, just to illustrate: Make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all the messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.

I have no idea if that's a good idea or not, but I think that's an ok representation of the idea.


Haha yea, I almost didn't post my comment since the original submission is about contributors where a one time "introduction fee" would solve these problems.

I was specifically thinking about general communication. Comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to messages we send each other nowadays.


It's not a market for lemons. We can share info about the lemons and all choose to use the good ones. There's no information asymmetry.


> We can share info about the lemons

That might turn out to be less than reliable over time, as bots are already screwing up systems with fake information and it's probably going to get worse.


I don't disagree with that, but the market for lemons still doesn't really fit.

If I remember my econ class correctly, it uses used cars as an example. If your neighbor bought a used Toyota and tells you it was a great purchase, you can't go out and buy another used Toyota and expect it to also be in great condition. Every car is a gamble.

But if you use something like Hubspot and tell your neighbor it's really good/bad you can expect to receive the same Hubspot service they did.


German here. That's not true. What crazy documentation do you require? An ID, proof of residence, and a business plan? (edit: you don't even need a business plan)

That being said, everything about the process is annoying and you always have the feeling that you're doing something wrong or forgetting something. Together with some ridiculously slow processing times, it's the perfect combination to frustrate you and I'm sure it ultimately reduces innovation.

But in reality, getting all the paperwork together is probably a couple of hours of work. You can buy services that do it for you for a couple of hundred Euros.


> ... and a business plan?

Why would the government need a business plan?

It's none of their business what you want to do with your company besides a general description as "software development" or "consulting services" or whatever.


> It's none of their business what you want to do with your company

There are plenty of European member states that want the ability to control very precisely what you do with "your company". You want to call yourself "a software engineer"? Ooops...

In the EU it seems particularly the German-speaking countries are borderline obsessed with a) titles, and b) whom may use those titles. See, for instance, https://hackernews.hn/item?id=34096464


> it seems particularly the German-speaking countries are borderline obsessed with a) titles

There is nothing borderline about that - the German cultural space (including very much the countries of former Habsburg Empire) is still completely obsessed with titles and formal positions despite many of them losing any practical importance in modern times.


I like that you know what an Engineer is in German, and not the 1000 BS Meanings it has in the US


Actually, I think I might be mistaken that you are even required to make a business plan. It's listed as one of the steps on the state's portal about founding a business. But it goes on to say that it's not technically required; it just highlights its importance.

https://www.existenzgruendungsportal.de/Navigation/DE/So-geh...


Several sectors of economic activity have the potential for atrocious externalities, and it's absolutely the government's business to know about these and make sure that you're following regulations to minimize them. When you make your employees or the neighbours sick (or straight up kill them), it's an enormous failure on the part of government. It's easy to be oblivious to that when you only think about software.

Exhibit A: https://www.ctvnews.ca/montreal/article/battery-facility-acc...


Except it seems that it's often large companies - typically those with lots of lawyers - who most regularly get away with what I can only describe as "corporate misdeeds".

"Following regulation" sounds great until it's revealed that corporate lobbyists have been helping (co-)write regulations to make sure that fair competition is quashed.


It’s interesting how people can apply thinking like “there are problems, it’s not perfect, better not to try” to government, but also be pro starting businesses.


I don't know much about corporations, but why are business plans needed at all? I mean, for EU citizens.

Banks (loans), immigration authorities and investors can be interested, but their interests don't cover every corporation out there.


There’s absolutely no need to have a business plan to start a company in Germany. You need articles of incorporation, and they state a company purpose, but this can be something as simple as “do IT consulting”.

Obviously, having a credible plan helps if you try to convince banks to loan you money or any such thing, but the act of registering a company requires no such thing.


It's basically a proof of "most basic effort", that you're serious. You could probably note down some stuff on a single A4 page and get it approved; it doesn't have to be a 40-page dossier.

Kind of like fizzbuzz, just something really simple and most basic to get rid of the "easy scams" and so on.

Edit: So "easy scams" are probably the wrong word, I initially wrote "riffraff" because in my mothertoungue that isn't so... disparaging, but what I meant was that it's used as "bare minimum filter" basically.


That doesn't really sound like a barrier to the easy scams at all. It just sounds like something someone once thought would be a good idea and now everyone has to do it because that's the process.


ChatGPT, give me a convincing-sounding business plan for starting a business in Germany.

Done.


How would this get rid of easy scams?


Previous HN discussion about setting up a GmbH.

https://hackernews.hn/item?id=39959368


> business plan

This is the problem. Let me pivot. Let me fail. Let my investors (including myself) lose time and money in bad ideas.

All the bureaucracy in the world didn’t stop Wirecard, but it sure as heck demotivated people from trying something new in Germany.


There is no problem, because no business plan is required.

