Hacker News new | past | comments | ask | show | jobs | submit | jakkos's comments

Every piece of KDE software I've tried has been buggy to the point that it's now a red flag to me: Spectacle (silently failed to copy/paste), krunner (refused to close), SDDM (refused to login), Dolphin (ffmpegthumbnailer loops lagged out whole system, SMB bugs), System Monitor (wrong information), KWallet (signal fails to open, data loss)


I am sure that people who use KDE can politely respond to your critique, but I can say this: I used Kate for some time and it's really great.

Fun fact: the Asahi Linux creator uses Kate :)


I have had problems with Spectacle related to permissions on Wayland and I think I experienced the failed to copy bug once.

I have not had any other significant problems for some years - not since KDE4. I do not use SMB but everything else works fine and KDE is my daily driver.


Been using KDE for years and never had any of these problems.


Copy pasting images is often hit & miss.

Sometimes I have the image copied but it doesn't paste in the browser. However it can be pasted to GIMP. If I paste it there and copy it from GIMP then I can paste it to the browser.

So whose fault is that? Spectacle's or the browser's? Maybe Wayland's?


Google sold the company to Lenovo in 2014


> It sounds like Blade is a cross-API graphics engine, by one of the original gfx-HAL (former QGPU name) creators?

My understanding is that wgpu has a lot of constraints and complexity imposed on it by all the backends it has to support (especially WebGPU), and that Blade is meant to be a much simpler, closer-to-the-metal API for people who want more control (and know how not to shoot themselves in the foot).

> I would be using eframe instead of WGPU as the backend.

Do you mean using egui-wgpu directly rather than through eframe? The default backend of eframe is wgpu (it used to be glow/opengl), and you can still use callbacks to directly render whatever you want with wgpu in an eframe app

> EGUI and WGPU have great integration

Can confirm, it was stupid simple to integrate egui into my wgpu gamedev project

> Am I missing something about Zed? I have tried and failed to get into it.

I also tried Zed after getting annoyed at Helix a few times, and thought "oh cool, this is like vscode but fast, and it even has a Helix mode!", but then didn't find any killer feature worth giving up the synergies of an all-in-terminal workflow for.


I appreciate the details!

> Do you mean using egui-wgpu directly rather than through eframe? The default backend of eframe is wgpu (it used to be glow/opengl), and you can still use callbacks to directly render whatever you want with wgpu in an eframe app

I apparently don't know how eframe works... had no idea it used a GPU at all or WGPU under the hood... I assumed it was just the default you use if making a 2d-only program.

Re Zed... I think I am too addicted to IDE functionality to be comfortable without it. I had assumed Zed could do it, but have now concluded it can't. And/or I can't figure out how to use the LSP features referenced here, or they are well hidden.


I often see people lament the lack of popularity for D in comparison to Rust. I've always been curious about D, as I like a lot of what Rust does, but never found the time to deep-dive, and would appreciate someone whetting my appetite.

Are there technical reasons that Rust took off and D didn't?

What are some advantages of D over Rust (and vice versa)?


> Are there technical reasons that Rust took off and D didn't?

As someone who considered it back then when it actually stood a chance to become the next big thing, from what I remember, the whole ecosystem was just too confusing and simply didn't look stable and reliable enough to build upon long-term. A few examples:

* The compiler situation: The official compiler was not yet FOSS, and other compilers were not available or at least not usable. The switch to FOSS happened way too late, and GCC support took too long to mature.

* This whole D version 1 vs version 2 thingy

* This whole Phobos vs Tango standard library thingy

* This whole GC vs no-GC thingy

This is not a judgement on D itself or its governance. I always thought it was a very nice language, and the project simply lacked the manpower and commercial backing to overcome the magical barrier of wide adoption. There was some excitement when Facebook picked it up, but unfortunately, it seems it didn't really stick.


> The compiler situation

I think people forget this. I know a lot of folks that looked at D back when it needed to win mindshare to compete with the currently en vogue alternatives, and every one of them nope'd out on the licensing. By the time they FOSS'ed it, they'd all made decisions for the alternative, and here we are.


How many people were working on the core compiler/language at the time versus Rust? This could explain it.


D has always been a handful of people.


> This whole GC vs no-GC thingy

And I am here, enjoying both. Life is good.


Can you elaborate on the points? I know nothing about D, but I'm just curious about old drama


ooooold drama. Like 2008.

FOSS: DMD was always open source, but the backend license was not compatible with FOSS until about 2017. D is now officially part of GCC (as of v6 I think?), and even the frontend for D in gcc is written in D (and actively maintained).

D1 vs. D2: D2 introduced immutability and a vastly superior metaprogramming system, but had incompatibilities with D1. Companies like Sociomantic that standardized on D1 were left with a hard problem to solve.

Tango vs. phobos: This was a case of an alternative standard library with an alternative runtime. Programs that wanted to use both tango- and phobos-based libraries could not. This is what prompted druntime, which is tango's runtime split out and made compatible, adopted by D2. Unfortunately, tango took a long time to port to D2, and the maintainers went elsewhere.

gc vs. nogc: The language sometimes adds calls to the gc without obvious invocations of it (e.g. allocating a closure or setting the length of an array). You can write code with @nogc as a function attribute, and it will ban all uses of the gc, even compiler-generated ones. This severely limits the runtime features you can use, so it makes the language a lot more difficult to work with. But some people insist on it because it avoids GC pauses in code that can't tolerate them. There are those who think the whole std lib should be nogc, to maximize utility, but we are not going in that direction.


D and Rust sit at opposite ends of the spectrum in dealing with memory safety. Rust ensures safety by constantly making you think about memory, with its highly sophisticated compile-time checks. D, on the other hand, lets you either employ a GC and forget about (almost) all memory-safety concerns, or use a block-scoped opt-out with cowboy-style manual memory management.

D retains object-oriented programming but also allows functional programming, while Rust seems to be specifically designed for functional programming and does not allow OOP in the conventional sense.

I've been working with D for a couple of months now and I noticed that it's almost a no-brainer to port C/C++ code to D because it mostly builds on the same semantics. With Rust, porting a piece of code may often require rethinking the whole thing from scratch.


> block scoped opt-out with cowboy-style manual memory management

Is this a Walter Bright alt? I've seen him use the cowboy programmer term a few times on the forum before.


The term 'Cowboy coder' has been around for some time. Everybody's favourite unreliable source of knowledge has issues dating back to 2011: <https://en.wikipedia.org/wiki/Cowboy_coding>


This was already a thing in Usenet days; an example from 1998:

https://groups.google.com/g/comp.lang.lisp/c/tfzX3Sq96Xk/m/0...


Yeah, I just saw his posts too and picked up the term :)


It makes sense for someone who has read about D to pick up on Bright phrasing.


I think 3 things

1. D had a split similar to Python 2 vs. 3 early on with having the garbage collector or not (and therefore effectively 2 standard libraries), but unlike Python it didn't already have a massive community that was willing to suffer through it.

2. It didn't really have any big backing. Rust having Mozilla backing it for integration with Firefox makes a pretty big difference.

3. D wasn't different enough; it felt much more like "this is C++ done better" than its own language, but unlike C++, which is mostly a superset of C, you couldn't do "C with classes"-style migrations.


One feature of D that I really wish other languages would adopt is its metaprogramming and compile-time code evaluation (IIRC you can use most of the language at compile time, as it runs in a bytecode VM), down to even having functions that generate source code which is then treated as part of the compilation process. I'm not sure about Rust, but I think it lacks this too; though if it has it to a similar extent as D, that might be the reason I check it out again more seriously.

Of course you can make codegen as part of your build process with any language, but that can be kludgy (and often limited to a single project).
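For what it's worth, the closest stable thing I know of in Rust is `const fn`, a much more restricted form of compile-time evaluation than D's CTFE (this comparison is my own, not from the thread). A minimal sketch:

```rust
// A `const fn` can be evaluated either at compile time (in a const
// context) or at run time, a restricted cousin of D's CTFE.
const fn factorial(n: u64) -> u64 {
    let mut acc = 1;
    let mut i = 2;
    while i <= n {
        acc *= i;
        i += 1;
    }
    acc
}

// Forced compile-time evaluation: this is computed by the compiler.
const FACT_10: u64 = factorial(10); // 3628800

fn main() {
    // The same function, called at run time with a dynamic argument:
    let n = std::env::args().count() as u64;
    println!("{} {}", FACT_10, factorial(n));
}
```

Unlike D's CTFE, `const fn` bodies are heavily restricted, and Rust has no built-in way to splice generated source back into compilation; that gap is what macros fill.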


Arguably, most of the metaprogramming in D is done with templates, and it comes with all the flaws of templates in C++. The error messages are long, and it's hard to decipher what exactly went wrong (static asserts help a lot here, when they actually exist). IDE support is non-existent after a certain point, because the IDE can't reason about code that doesn't exist yet. And code gets less self-documenting, because it's all Output(T,U) foo(T, U)(T t, U u), and even the official samples use auto everywhere because it's hard to get the actual output types.


It is quite ridiculous to place C++'s metaprogramming next to D's. For one, in D it's the same language, and one can choose whether to execute compile-time-constant parts at compile time or run time. In C++ it's a completely different language that was bolted on. C++ did adopt compile-time constant expressions from D, though.


I'd say D's template error messages are much better than C++'s, because D prints the instantiation stack with exact locations in the code and the whole message is just more concise. In C++, it just prints a bunch of gibberish, and you're basically left guessing.


No, templates are only needed to introduce new symbols. And D templates are vastly superior to C++. D's superpowers are CTFE, static if, and static foreach.

auto is used as a return type because it's easy, and in some cases because the type is defined internally in the function and can't be named.

You would not like the code that uses auto everywhere if you had to type everything out, think range wrappers that are 5 levels deep.
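Rust hits the same wall with deeply nested wrapper types, for what it's worth; there, `impl Trait` plays roughly the role `auto` plays for unnameable return types (my own analogy, and `evens_squared` is just an illustrative name):

```rust
// The concrete type of this chain involves closure types that cannot be
// written down at all, so the signature says `impl Iterator` instead,
// much like returning `auto` in D for an unnameable range wrapper.
fn evens_squared(limit: i32) -> impl Iterator<Item = i32> {
    (0..limit).filter(|n| n % 2 == 0).map(|n| n * n)
}

fn main() {
    let v: Vec<i32> = evens_squared(7).collect();
    println!("{:?}", v); // [0, 4, 16, 36]
}
```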


Rust has procedural macros, which turn out to be a good-enough substitute for real compile-time reflection for surprisingly many use cases, though nowhere near all of them. (In particular, Serde, the universally-adopted framework/library for serializing and deserializing arbitrary data types, is a third-party library powered by procedural macros.)

Real compile-time reflection is in the works; the very earliest stages of a prototype implementation were released to the nightly channel last month (https://github.com/rust-lang/rust/pull/146923), and the project has proposed (and is likely to adopt) the goal of completing that prototype implementation this year (https://rust-lang.github.io/rust-project-goals/2026/reflecti...), though it most likely will not reach the stable channel until later than that, since there are a whole lot of complicated design questions that have to be considered very carefully.
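For a sense of scale: declarative `macro_rules!` macros are the simpler end of Rust metaprogramming, pattern-matching on token streams rather than inspecting types the way reflection would. A toy sketch of my own (not how Serde's derive works internally; that is a full procedural macro operating on the syntax tree):

```rust
// A declarative macro that generates a struct plus getter methods at
// compile time, purely by token substitution. There is no type
// introspection here; that is what reflection would add.
macro_rules! make_getters {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name { $($field: $ty),* }
        impl $name {
            $(fn $field(&self) -> &$ty { &self.$field })*
        }
    };
}

make_getters!(Point { x: i32, y: i32 });

fn main() {
    let p = Point { x: 3, y: 4 };
    println!("{} {}", p.x(), p.y()); // 3 4
}
```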


"Powered by" is an understatement, Serde would be unusable without procedural macros. Deserializers use a ridiculously verbose visitor pattern that's completely unnecessary in a language with move semantics, it should have been a recursive descent API.

Using serde_json to accurately model existing JSON schemas is a pain because of it.

I personally find third-party deriving macros in Rust too clunky to use as soon as you need extra attributes.


> Are there technical reasons that Rust took off and D didn't?

My (somewhat outdated) experience is that D feels like a better and more elegant C++. Rust certainly has been influenced by C and C++, but it also took a lot of inspiration from the ML-family of languages and it has a much stronger type system as a consequence.


D has much better metaprogramming compared to Rust. That has been one of the only things making me still write a few D programs. You can do compile time type introspection to generate types or functions from other elements without having to create a compiler plug-in parsing Rust and manipulating syntax trees.

Rust has some of the functional programming niceties like algebraic data types and that's something lacking in D.


It's more about the companies that jumped into Rust versus D: D only had Facebook and Remedy Games toy with it a bit.

Many of us believe in automatic memory management for systems programming, having used quite a few such languages in those scenarios, so that is already one thing that D does better than Rust.

There is the GC phobia, mostly among folks who don't get that not all GCs are born alike. Just as you need to pick and choose your malloc()/free() implementation depending on the scenario, there are many ways to implement a GC, and having a GC doesn't preclude having value types, stack allocation, and global memory segment allocation.

D has compile time reflection, and compile time metaprogramming is much easier to use than Rust macros, and it does compile time execution as well.

And the compile times! It is like using Turbo Pascal, Delphi, ... even though the language is like C++ in capabilities. Yet more proof that complexity doesn't imply slow compile times in a native systems language.

For me, C# and Swift now cover the tasks at work where, in the past, I could have reached for D, mostly due to who is behind those languages; and I don't want to be that guy who leaves and was the only one who knew the stack.


> Many of us believe on automatic memory management for systems programming

The problem is the term "systems programming". For some, it's kernels and device drivers. For some, it's embedded real-time systems. For some, it's databases, game engines, compilers, language run-times, whatever.

There is no GC that could possibly handle all these use-cases.


But there could be a smoother path between having a GC and having no GC.

Right now, you'd have to switch languages.

But in a Great Language you'd just have to refactor some code.


Why would you have to switch languages? There are no languages with 'no GC', there are only languages with no GC by default.

Take C - you can either manually manage your memory with malloc() and free(), or you can #include a GC library (-lgc is probably already on your system), and use GC_malloc() instead. Or possibly mix and match, if you're bold and have specific needs.

And if ever some new revolutionary GC method is developed, you can just replace your #include. Cutting-edge automatic memory management forever.


Except there is; it's only among GC-haters that there isn't.

People forget there isn't ONE GC, but rather several possible implementations depending on the use case.

Java Real-Time GC implementations are quite capable of powering weapon targeting systems on the battlefield, where a failure causes the wrong side to die.

> Aonix PERC Ultra Virtual Machine supports Lockheed Martin's Java components in Aegis Weapon System aboard guided missile cruiser USS Bunker Hill

https://www.militaryaerospace.com/computers/article/16724324...

> Thales Air Systems Selects Aonix PERC Ultra For Java Execution on Ground Radar Systems

https://vita.militaryembedded.com/5922-thales-execution-grou...

Aonix is nowadays owned by PTC, and there are other companies in the field offering similar implementations.


Look, when someone says "There's no thing that could handle A,B,C, and D at the same time", answering "But there's one handling B" is not very convincing.

(Also, what's with this stupid "hater" thing, it's garbage collection we're talking about, not war crimes)


It is, because there isn't a single language that is a hammer for all types of nails.

It isn't stupid; it is the reality of how many people have behaved for decades.

Thankfully, that issue has been slowly sorting itself out through generational replacement.

I already enjoy that on some platforms we have reached a point where the old ways are constrained to a few scenarios, and that's it.


> Are there technical reasons that Rust took off and D didn't?

This talk explains why; it's not technical: https://www.youtube.com/watch?v=XZ3w_jec1v8

> What are some advantages of D over Rust (and vice versa)?

Advantages for D: builds faster; in typical programs you need about 20 packages, not 100; COM objects; easy metaprogramming; 3 compilers; a GC; way better at scripting.

Advantages for Rust: borrow-checker is better. rustup.


> This talk explain why, it's not technical: https://www.youtube.com/watch?v=XZ3w_jec1v8

Really good talk, I remember watching it when it came out. Elm is what got me looking into FP several years ago, a nice language it was.


> Are there technical reasons that Rust took off and D didn't?

Yes. D tried to jump on the "systems programming with garbage collection" dead horse, with predictable results.

(People who want that sort of stupidity already have Go and Java, they don't need D.)


> (People who want that sort of stupidity already have Go and Java, they don't need D.)

Go wasn't around when D was created, and Java was an unbelievable memory hog, with execution speeds that could only be described as "glacial".

As an example, using my 2001 desktop, the `ls` program at the time was a few kb, needed about the same in runtime RAM and started up and completed execution in under 100ms.

The almost equivalent Java program I wrote in 2001 to list files (with `ls` options) took over 5s just to start up and chewed through about 16MB of RAM (around 1/4 of my system's RAM).

Java was a non-starter at the time D came out - the difference in execution speed between C++ systems programs and Java systems programs felt, to me (i.e. my perception), larger than the current difference in performance between C++/C/Rust programs and Bash shell scripts.


It still is an unbelievable memory hog. It got faster though.


Go wasn't around when D was released and Java has for the longest time been quite horrible (I first learnt it before diamond inference was a thing, but leaving that aside it's been overly verbose and awkward until relatively recently).


Is Java even a "systems programming" language?

I don't even know what that term means anymore; but afaik Java didn't really have reliable low-level APIs until recently.


Depends if one considers writing compilers, linkers, JITs, database engines, and running bare metal on embedded real time systems "systems programming".


so, "yes", but with added sarcasm? :D


- It is possible to write Rust in a pretty high level way that's much closer to a statically-typed Python than C++ and some people do use it as a Python replacement

- You can build it into a single binary with no external deps

- The Rust type system + ownership can help you a lot with correctness (e.g. encoding invariants, race conditions)
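A toy sketch of the first and third points (the `Conn` example and all names here are my own illustration, not from the comment):

```rust
use std::collections::HashMap;
use std::marker::PhantomData;

// High-level, almost-Python style: a word count over an iterator chain.
fn word_counts(text: &str) -> HashMap<&str, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }
    counts
}

// Encoding an invariant in the type system: a connection can only be
// queried after it has been authenticated; misuse is a compile error.
struct Unauthenticated;
struct Authenticated;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Unauthenticated> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    // Consumes the unauthenticated connection, returns an authenticated one.
    fn login(self) -> Conn<Authenticated> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Authenticated> {
    // Only callable once logged in; there is no query() on the other state.
    fn query(&self) -> &'static str {
        "ok"
    }
}

fn main() {
    let counts = word_counts("a b a");
    let conn = Conn::<Unauthenticated>::new().login();
    println!("{} {}", counts["a"], conn.query()); // 2 ok
}
```

Trying to call `query()` before `login()` simply does not type-check, which is the kind of invariant encoding the list refers to.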


Are you against copyright, patents, and IP in all forms then?


Independent of ones philosophical stance on the broader topic: I find it highly concerning that AI companies, at least right now, seem to be largely exempt from all those rules which apply to everyone else, often enforced rigorously.


I draw from this that no one should be subject to those rules, and we should try to use the AI companies as a wedge to widen that crack. Instead, most people who claim that their objection is really only consistency, not love for IP, spend their time trying to tighten the definitions of fair use, widen the definitions of derivative works, and in general make IP even stronger, which will affect far more than just the AI companies they're going after. This doesn't look to me like the behavior of people who truly only want consistency but don't like IP.

And before you say that they're doing it because it's always better to resist massive, evil corporations than to side with them, even if it might seem expedient to do so: the people who are most strongly fighting against AI companies in favor of IP, in the name of "consistency", are themselves siding with Disney, one of the most evil companies — from the perspective of the health of the arts and our culture — operating right now. So they're already fine with siding with corporations; they just happened to pick the side that's pro-IP.


oh hey, let's have a thought experiment in this world with no IP rules

suppose I write a webnovel that I publish for free on the net, and I solicit donations. Kinda like what's happening today anyway.

Now suppose I'm not good at marketing, but this other guy is. He takes my webnovel, changes some names, and publishes it online under his name. He is good at social media and marketing, and so makes a killing from donations. I don't see a dime. People accuse me of plagiarism. I have no legal recourse.

Is this fair?


There are also unfair situations that can happen, equally as often, if IP does exist, and likewise, in those situations, those with more money, influence, or charisma will win out.

Also, the idea that that situation is unfair relies entirely on the idea that we own our ideas and have a right to secure (future, hypothetical) profit from them. So you're essentially begging the question.

You're also relying on a premise that, when drawn out, seems fundamentally absurd to me: that you should own not just the money you earn, but the rights to any money you might earn in the future, had someone not done something that caused unrelated others to never have paid you. If you extend that logic, any kind of competition is wrong!


let's have another thought experiment:

there are two programmers. first is very talented technically, but weak at negotiations, so he earns median pay. second is average technically, but very good at negotiations, and he earns much better.

is it fair?

life is not fair.


Surely one can easily see that the second programmer didn't take the first programmer's talent (or his knowledge) and claim it as their own...


Engineers man… of all the problems we see today, giving real power to engineers is probably a root cause of many.


In China, engineers hold the most power, yet the country prospers. I don't think the problem is giving engineers power; rather, it's a cultural thing. In China there is a general feeling of contributing towards society; in the US everyone is trying to screw over each other, for political or monetary reasons.


I am.


Absolutely. As any logical person should be.


This is obviously false on its face. Let's say I have a patent, song, or book that I receive large royalty payments for. It would obviously not be logical for me to be in favor of abolishing something that's beneficial to me.

Declaring that your side has a monopoly on logic is rarely helpful.


Either by democracy (more consumers than producers), or ethically (thoughts and intellect are not property), it's logical. I guess it is not logical for someone who makes money from it today.


> Pre-training is, actually, our collective gift

I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.

> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

I've seen LLMs generate code that I have immediately recognized as being copied from a book or technical blog post I've read before (e.g. exact same semantics, very similar comment structure and variable names). Even if not legally required, crediting where you got ideas and code from is the least you can do, while LLMs just launder code as completely your own.


> I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models

That’s been the fate of many creators since the dawn of time. Kafka explicitly stated that he wanted his works to be burned after his death. So when you’re reading about Gregor’s awkward interactions with his sister, you’re literally consuming the private thoughts of a stranger who stated plainly that he didn’t want them shared with anyone.

Yet people still talk about Kafka’s “contribution to literature” as if it were otherwise, with most never even bothering to ask themselves whether they should be reading that stuff at all.


If he didn't want us to read Metamorphosis he probably shouldn't have had it published. It was, long before his death.

But it's true much of his work was unpublished when he died and was "rescued" or "stolen", depending on what narrative you prefer.


I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.


Exactly that. And all the books about, for instance, operating systems are totally based on the work of others: their ideas were collected and documented, the exact algorithms, and so forth. All human culture has worked this way. Moreover, there is a strong pattern of the most prolific / well-known open source developers NOT being against the fact that their code was used for training: they can't speak for everybody, but it is a signal that for many this use is within the scope of making source code available.


> their ideas were collected and documented

Yeah, documented *and credited*. I'm not against the idea of disseminating knowledge, and even with my misgivings about LLMs, I wouldn't have said anything if this blog post was simply "LLMs are really useful".

My comment was in response to you essentially saying "all the criticisms of LLMs aren't real, and you should be uncompromisingly proud about using them".

> Moreover there is a strong pattern of the most prolific / known open source developers being NOT against the fact that their code was used for training

I think it's easy to get "echo-chambered" by who you follow online with this; my experience has been the opposite. I don't think it's clear what the reality is.


If you fork an open source project and nuke the git history, that's considered to be a "dick move" because you are erasing the record of people's contributions.

LLMs are doing this on an industrial scale.


I don't really understand how that isn't allowed/disallowed simply on the basis of whether the licence permits use without attribution?


The hard truth is that if you're big enough (and the original creator is small enough) you can just do whatever you want and to hell with what any license says about it.


To my understanding, the expensive lawyers hired by the biggest people around, filtered through layers of bureaucracy and translated to software teams, still result in companies mostly avoiding GPL code.


I’ve been thinking that information provenance would be very useful for LLMs. Not just for attribution (git authors), but the LLM would know (and be able to control) which outputs are derived from reliable sources (e.g. Wikipedia vs a Reddit post; also which outputs are derived from ideologically-aligned sources, which would make LLMs more personal and subjectively better, but also easier to bias and generate deliberate misinformation).

“Information provenance” could (and I think most likely would, although I’m very unfamiliar with LLM internals) be which sources most plausibly derive an output, so even output that exists today could eventually get proper attribution.

At least today if you know something’s origin, and it’s both obvious and publicly online, you have proof via the Internet Archive.


You can say that about literally everything, yet we have robust systems for protecting intellectual property, anyway.


> I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.

Yes, but you can also ask the developer (whether on Libera IRC or, if it's a FOSS project, at any FOSS talk) which books and blogs they followed for code patterns and inspiration, and just talk to them.

I do feel like some aspects of this are gonna get eaten away by the black box if we do spec-development imo.


> there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.

There are subtle legal differences between "free open source" licensing and putting things in the public domain. If you use an open source license, you could forbid LLM training (in licensing law, contrary to all other areas of law, anything that is not granted to licensees is forbidden). Then you can take the big guys (MSFT, Meta, OpenAI, Google) to court if you can demonstrate they violated your terms.

If you place your software into the public domain, any use is fair, including ways to exploit the code or its derivatives not invented at the time of release.

Curiously, doesn't the GPL even imply that if you pre-train an LLM with GPLed code and use it to generate code (Claude Code etc.), all generated code -- as the derived intellectual property that it clearly is -- must also be open sourced as per GPL terms? (It would seem in the spirit of the licensors.) I haven't seen this raised or discussed anywhere yet.


> If you use an open source license, you could forbid LLM training

Established OSS licenses are all from before anyone imagined that LLMs would come into existence, let alone that they would train on and then generate code. Discrimination on purpose is counter to OSI principles (https://opensource.org/osd):

> 6. No Discrimination Against Fields of Endeavor

> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

The GPL argument you describe hinges on making the legal case that LLMs produce "derived works". When the output can't be clearly traced to source input (even the system itself doesn't know how) it becomes rather difficult to argue that in court.


You presuppose that the output is a derived work (not a given) and that training is not fair use (also not a given).

If the courts decide to apply the law as you assume, the AI companies are all dead. But they are all betting that's not going to be the case. And since so much of the industry is taking the bet with them... the courts will take that into account.


If you publish your code to others under permissive licenses, people using it to do things you do not want is not something being unwillingly taken from you.

You can do whatever you want with a gift. Once you release your code as free software, it is no longer yours. Your opinions about what is done with it are irrelevant.


But the license terms state under which conditions the code is released.

For example, the MIT license has this clause: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."

It stands to reason that if an LLM outputs something based on MIT-licensed code, then that output should at least carry that copyright notice, because that's what the original author wished.

And I saw a comment below arguing that knowledge cannot be copyrighted, but code is an expression of that knowledge, and that expression most certainly can be protected by copyright.


Intellectual property is not absolute and can be expropriated, just like any other property.


"Expropriated" usually means a government order, though. Do LLMs have one?


When you implement a quick sort, do you credit Hoare in the comments?


No, in the same way that I wouldn't cite Euler every time I used one of his theorems - because it's so well known that its history is well documented in countless places.

However, if I was using a more recent/niche/unknown theorem, it would absolutely be considered bad practice not to cite where I got it from.


If I was implementing any known (named) algorithm intentionally, I think I would absolutely say so in a comment (`// here we use quick sort to...` and maybe why it was chosen), and then it's easy for someone to look it up and see it's due to Hoare or whoever on Wikipedia etc.
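The kind of attribution comment described above might look like this. A minimal sketch in Python; the function name and comment wording are illustrative, not from any particular codebase:

```python
# Quick sort (C. A. R. Hoare, 1961). Chosen here because it is simple,
# well known, and fast on average; see the Wikipedia article for history.
def quick_sort(items):
    """Return a sorted copy of items using the quick sort scheme."""
    if len(items) <= 1:
        return list(items)
    # Partition around a pivot, then sort each side recursively.
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```

The comment names the algorithm and its originator, which makes the provenance trivially traceable, exactly the courtesy the parent comments are debating.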


Now many will downvote you because this is an algorithm and not some code. But the reality is that programming is in large part built by looking at somebody else's code and techniques, internalizing them, and reproducing them again with changes. So it actually works like that for code as well.


> It wasn't their "gift", it was unwillingly taken from them.

Yes. Exactly. As a developer in that case I feel almost violated in my trust in “the internet.” Well it’s even worse, I did not really trust it, but did not think it could be that bad.


I don't understand this perspective. Programmers often scoff at most other examples of intellectual property, some throwing it out altogether. I remember reading Google v. Oracle, where Oracle sued Google for copying code to perform a range check, about 9 lines long, used to check array index bounds.

I guess the difference is AI companies bad? This is transformative technology creating trillions in value and democratizing information, all subsidized by VC money. Why would anyone in open source who claims to have noble causes be against this? Because their repo will no longer get stars? Because no one will read their asinine stack overflow answer?

https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....


Hot take: The Supreme Court should have sided with Oracle. APIs are a clear example of unique expression, and there is no statute specifically exempting them from copyright protection. If they are not protected by copyright, is anything really? What meaning does copyright law have then?


Why is copyright law more important than anything else? AI is likely to drive the next stage of humanity's intellectual evolution, while copyright is a leaky legal abstraction that we pulled out of our asses a couple hundred years ago.

One of these is much more important than the other. If the copyright cartels insist on fighting AI, then they must lose decisively.


Hot take: Intellectual property law is stifling innovation and humanity would be better served scrapping it.


HDR videos and games (both native and proton) work in both KDE and Gnome (and supposedly Sway and Hyprland, but I haven't tried either). I think support in KDE/Gnome landed in a stable release ~6 months ago.

The HDR experience on KDE is about as good as the Windows one. Last time I tried Gnome, there was no way to configure SDR and HDR brightness separately, but it was definitely still usable.


The problem was not only in KDE but also in the NVIDIA drivers, if I understand correctly. For my setup, HDR has been working stably on KDE since early 2025.


First time round, Trump would consistently say lots of worrying stuff, but people in the US administration would stop him from following through.

This time, it's become quickly evident that he is following through.

The sentiment in Europe has changed from "well, this isn't ideal, but we can just wait it out" to "this is scary and existential, we need self-sufficiency as soon as possible".


> Every operating system is in US hands

Desktop Linux is becoming usable for a normal person just in time. I was surprised how easily a non-technical friend switched over to Bazzite (immutable Fedora with gaming extras).

> Visa, Mastercard, Paypal

The EU has already been working on a "Digital Euro" for a while

> all social media commonly used

I'm hoping more decentralized social media continues to pick up steam

