Hacker News | simonask's comments

I really like Ruby. It had a formative impact on my young programmer self, particularly the culture. So much joyful whimsy.

But like... something like a font renderer in Ruby? The thing that is incredibly cache sensitive and gets run millions of times per day on a single machine? The by far slowest step of rendering any non-monospaced UI?

The Earth is weeping my brother.


It doesn't typically get run millions of times per day, because in most regular uses it's trivial to cache the glyphs. I use it for my terminal, and it's not in the hot path at all for rendering: it's only run the first time any glyph is rendered at a new size. If you want to add hinting and ligatures etc., that complicates the caching, but I have no interest in that for my use. And then it turns out rendering TrueType fonts is really easy:

https://github.com/vidarh/skrift

(Note that this is a port of the C-based renderer libschrift; the Ruby version is smaller, but much less so than "usual" when converting C code - libschrift itself is very compact)
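To make the caching point concrete, here's a minimal sketch (a hypothetical Java `GlyphCache`, not code from skrift) of the scheme described above: the expensive rasterization runs once per (glyph, size) pair, and every later lookup is a cache hit.

```java
import java.util.HashMap;
import java.util.Map;

public class GlyphCache {
    // Cache key: one entry per (codepoint, pixel size) pair.
    record Key(int codepoint, int sizePx) {}

    private final Map<Key, byte[]> cache = new HashMap<>();
    private int rasterCalls = 0;   // counts how often the slow path runs

    byte[] get(int codepoint, int sizePx) {
        // Rasterize only on a cache miss.
        return cache.computeIfAbsent(new Key(codepoint, sizePx), this::rasterize);
    }

    private byte[] rasterize(Key k) {
        rasterCalls++;                        // stand-in for the slow renderer
        return new byte[k.sizePx() * k.sizePx()];
    }

    public static void main(String[] args) {
        GlyphCache c = new GlyphCache();
        c.get('A', 16);
        c.get('A', 16);   // cache hit, no rasterization
        c.get('A', 32);   // new size, rasterized again
        System.out.println(c.rasterCalls);    // 2
    }
}
```

With this shape, the renderer's speed only matters during the first frame a glyph appears at a given size, which is the parent comment's point.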


I don’t use any KDE apps, but the Plasma desktop has been absolutely rock solid and super performant for me.

I do think that the idea that each toolkit has its own native app for each thing you might want to do with a computer is a recipe for a forest of half-maintained nearly-good apps. A lot of the KDE and GNOME app suites feel like checking boxes.


The author's name is Kamilla. She was born in 2004 (according to the article).

The idea that something can simultaneously be "woke [and] leftist" and somehow still defined by its attachments to corporations is a baffling expression of how detached from reality the US political discourse is.

The rest of the world looks on in wonder at both sides of this.


"I hate corporations and I hate leftists, ergo they must be the same thing"

Please tell me this also means that they are redirecting the expenses currently going to Microsoft into funding open source development?

C# has a unique powerful position in the video game space. In almost every other niche, there are better (or just trendier) solutions, but C# is the only major language that actually gives you a combination of features that are important in video games:

- Dynamic runtime with loose coupling and hot reload of code - extremely useful during development.

- Value types. You don't want every Vector4 to be heap allocated when you're doing 3D graphics, because that's going to be absolutely unusable.

- Access to a relatively low-level substrate for basically-native performance when needed - extremely useful when you're trying to actually ship something that runs well.

Taken in isolation, C# isn't best in class for any of them, but no other language offers all three, especially not if you also want things like a really good debugger and great IDE tools.

To my knowledge, Java has none of these features (yet), and they aren't really important in a lot of the areas where Java is popular. But this is why C# in particular is very strong in the video games niche.


> no other language offers all three

Julia. Of course with the added downside that it's not deployable (asterisk here), which is somewhat important for games. IDE and debugger could be better, but at least it doesn't insist on classes like C#.


I think these are all valid arguments, but I do want to point out that Java is addressing them.

The first bullet is possible with the JetBrainsRuntime, a fork of OpenJDK: https://github.com/JetBrains/JetBrainsRuntime

The second bullet is a headline goal of Project Valhalla, however it is unlikely to be delivered in quite the way that a C# (or Go or Rust etc.) developer might expect. The ideal version would allow any object with purely value semantics [1] to be eligible for heap flattening [2] and/or scalarization [3], but in experimental builds that are currently available, the objects must be from a class marked with the "value" qualifier; importantly, this is considered an optimization and not a guarantee. More details: https://openjdk.org/projects/valhalla/value-objects

The third bullet (IIUC) is addressed with the Foreign Function & Memory API, though I'll admit what I've played around with so far is not nearly as slick as P/Invoke. See e.g. https://openjdk.org/jeps/454

[1] value semantics means: the object is never on either side of an == or != comparison; the equals and hashCode methods are never called, or are overridden and their implementation doesn't rely on object identity; no methods are marked synchronized and the object is never the target of a synchronized block; the wait, notify, and notifyAll methods are never called; the finalize method is not overridden and no cleaner is registered for the object; no phantom or weak references are taken of the object; and probably some other things I can't think of

[2] heap flattening means that an object's representation when stored in another object's field or in an array is reduced to just the object's own fields, removing the overhead from storing references to its class and monitor lock

[3] scalarization means that an object's fields would be stored directly on the stack and passed directly through registers
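As a concrete illustration of the conditions in [1], here is a hypothetical `Vec2` record (my example, not from the Valhalla docs): all state lives in final fields, equality is state-based rather than identity-based, and nothing synchronizes on or finalizes the object, so in principle it is the kind of class that could be flattened or scalarized.

```java
// A value-semantics candidate per [1]: no identity-dependent operations.
record Vec2(float x, float y) {
    Vec2 plus(Vec2 o) { return new Vec2(x + o.x, y + o.y); }
}

public class ValueDemo {
    public static void main(String[] args) {
        Vec2 a = new Vec2(1f, 2f);
        Vec2 b = new Vec2(1f, 2f);
        // Records derive equals/hashCode from the fields, not object identity:
        System.out.println(a.equals(b));   // true
        System.out.println(a.plus(b));     // Vec2[x=2.0, y=4.0]
    }
}
```

Under current experimental Valhalla builds you would additionally mark such a class with the `value` modifier; as the parent notes, even then flattening is an optimization, not a guarantee.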


The third bullet is also presumably referring to C#'s long-standing support for unsafe { } blocks for low-level pointer math, as well as the modern tools Span<T> and Memory<T>, which are GC-safe low-level memory management/access/pointer math tools in modern .NET. Span<T>/Memory<T> is a bit like a modest partial implementation of Rust's borrowing mechanics, achieved without changing much of how .NET's stack and heap work or compromising much on .NET's bounds-checking guarantees, through an interesting dance of C# compiler smarts and .NET JIT smarts.


The FFM API actually does cover a lot of the same ground, albeit with far worse ergonomics IMO. To wit,

- There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future

- There is no "fixed" statement and frankly nothing like it at all; native code is just not allowed to access managed memory, period. Instead, you set up an arena to be shared between managed and native code

- MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here

- Setting up a MemoryLayout to describe a struct is just not as nice as slapping layout attributes on an actual struct

- Working with VarHandle is just way more verbose than working with pointers
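For reference, a minimal sketch (Java 22+, where JEP 454 is final; the `FfmSketch` class and the point layout are my own example) of the pieces those bullets mention: a MemoryLayout standing in for a C struct, a confined Arena owning the native allocation, and VarHandle-based field access.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.StructLayout;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

public class FfmSketch {
    public static void main(String[] args) {
        // Describe a C struct { int x; int y; } with a MemoryLayout --
        // the FFM counterpart of slapping layout attributes on a struct.
        StructLayout point = MemoryLayout.structLayout(
                ValueLayout.JAVA_INT.withName("x"),
                ValueLayout.JAVA_INT.withName("y"));
        VarHandle x = point.varHandle(MemoryLayout.PathElement.groupElement("x"));
        VarHandle y = point.varHandle(MemoryLayout.PathElement.groupElement("y"));

        // A confined Arena owns the native memory; it is freed
        // deterministically when the try-with-resources block exits.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment p = arena.allocate(point);
            x.set(p, 0L, 3);   // VarHandle access: verbose next to raw pointers
            y.set(p, 0L, 4);
            int sum = (int) x.get(p, 0L) + (int) y.get(p, 0L);
            System.out.println(sum);   // 7
        }
    }
}
```

Even this tiny example shows the verbosity gap the parent describes: in C# the same thing is a struct definition and a couple of pointer or Span<T> operations.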


> - There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future

Which sounds funny, because C# has effectively gone the other direction. .NET's Code Access Security (CAS) used to heavily penalize unsafe blocks (and unchecked blocks, another relative of C#'s that I don't think has a direct Java equivalent), limiting how libraries could use such blocks without extra mandatory code signing and permissions, and throwing all sorts of weird runtime exceptions in CAS environments with slightly wrong permissions. CAS is mostly gone today, so most C# developers only ever experience compiler warnings and warnings-as-errors when trying to use unsafe (and/or unchecked) blocks. More libraries can use them for low-level things than before. (But also fewer libraries need to, thanks to Memory<T>/Span<T>.)

> There is no "fixed" statement and frankly nothing like it all, native code is just not allowed to access managed memory period; instead, you set up an arena to be shared between managed and native code

Yeah, this seems to be an area that .NET has a lot of strengths in. Not just the fixed keyword, but also a direct API for GC pinning/unpinning/locking and many sorts of "Unsafe Marshalling" tools to provide direct access to pointers into managed memory for native code. (Named "Unsafe" in this case because they warrant careful consideration before using them, not because they rely on unsafe blocks of code.)

> MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here

It's the ease of use that really makes Memory<T>/Span<T> shine. It's a lot more generally useful throughout the .NET ecosystem (beyond just "foreign function interfaces") to the point where a large swathe of the BCL (Base Class Library; standard library) uses Span<T> in one fashion or another for easy performance improvements (especially with the C# compiler quietly preferring Span<T>/ReadOnlySpan<T> overloads of functions over almost any other data type, when available). Span<T> has been a "quiet performance revolution" under the hood of a lot of core libraries in .NET, especially just about anything involving string searching, parsing, or manipulation. Almost none of those gains have anything to do with calling into native code and many of those performance gains have also been achieved by eliminating native code (and the overhead of transitions to/from it) by moving performance-optimized algorithms that were easier to do unsafely in native code into "safe" C#.

It's really cool what has been going on with Span<T>. It's really wild some of the micro-benchmarks of before/after Span<T> migrations.

Related to the overall topic, it's said Span<T> is one of the reasons Unity wants to push faster to modern .NET, but Unity still has a ways to go before it uses enough of the .NET CoreCLR memory model to take real advantage of it.


Yeah, coming to C# from Rust (in a project using both), I’ve been extremely impressed by the capabilities of Span<T> and friends.

I’m finding that a lot of code that would traditionally need to be implemented in C++ or Rust can now be implemented in C# at no or very little performance cost.

I’m still using Rust for certain areas where the C# type system is too limited, or where the borrow checker is a godsend, but the cooperation between these languages is really smooth.


The inconvenient truth here is also that following the system theme is an anti-feature for most apps. On the desktop, you want your app window to be recognizable at a glance, meaning the primary color should be the brand color, etc.

I currently have open Chrome, Spotify, Discord, Aseprite, and Zed. All of them look completely different, and that's actively helpful for me, the user.

It's nice to follow the system's light/dark setting, and obviously the behavior of basic UI controls should be unsurprising, but beyond that there's no point in "consistency".


This. Features that use system colors so often cause apps to be unreadable or just look like crap. The first time I get a bug report that people can't read something, I will lock the colors down; I just don't have time for that.

Who says the system theme is well designed at all? Back in the 1980s you could count on most text color combinations on a Commodore 64 or an IBM 3279 or a PC with a CGA working.

Today it is absolutely normal to type

   ls
on a Linux machine out of the box and, if you are running X or Wayland, some of the file names are dark blue on a black background and completely unreadable. To be fair, if you are logging into a Linux machine from Windows with ssh on CMD.EXE or most terminal software, you get similarly poorly chosen colors. (MacOS does do better!)

As a web developer it pisses me off because I am expected to follow

https://www.w3.org/WAI/standards-guidelines/wcag/

and regularly my management gets legalistic looking documents from customers complaining that we only have 6.5:1 contrast on something and you know what I do... I fix it. I wouldn't send anything to my tester that was unreadable and if I did I'd expect her to put in a ticket and I would... fix it. When MUI computes the coordinates wrong and something draws 20px right of where it should be... I fix it.

Whenever I've put similar tickets to the various parts of the Linux desktop mafia they close it as "won't fix" and often give me a helping of verbal abuse. Even Microsoft occasionally fixes something (even if half a decade late) and their people are polite.


How is bad UX in the YouTube app Apple's fault? You can mess up Android's back button as well.


Because Apple does not enforce uniformity


Ah yes, problems with the UI on the OS whose UI for 99% consists of modals are not the OS vendor's problem, noted.


FWIW, x86 has always been a moving target, with many instruction set extensions, especially for various SIMD features. Even something as fundamental as `popcnt` has a separate CPUID flag, even though Intel considers it part of SSE4.2.

Targeting the broadest possible variant of x86-64 limits you to SSE2, which is really not very capable outside of fairly basic float32 linear algebra. Great for video games, but not much else.

Also keep in mind that .NET originated right at the cusp of x86-64, which again is a whole different architecture from its 32-bit predecessor. Most native apps shipped separate 32-bit and 64-bit binaries for years.

And of course, I think Microsoft was aware of their intrinsic dependency on other companies, especially Intel. I can see how the promise of independence was and is enticing. They also weren't interested in another dependency on Sun/Oracle/whoever maintains Java at the moment. While Windows on ARM64 is still in a weird spot, things like .NET are useful in that transition.

Lastly, the CLR is different from the JVM in a number of interesting ways. The desktop experience with the JVM is not great, and Java is a very bad language. It makes sense to do your own thing if you're Microsoft or Apple.


I doubt that .NET was meant to solve the problem of underlying variation in the instruction sets of Intel processors, because that concern does not exist for 98% of applications anyway; they rarely have to touch compiler settings. For the remaining 2%, the compiler flags are a huge set of options, and that kind of tweaking is NOT available for .NET anyway.

Additionally, applications that want to exploit a particular processor's instruction set have no way to do so without detecting CPUID and dropping into so-called "unmanaged code", because .NET is all about a very high-level IR that even has object-oriented features.


The .NET JIT compiler absolutely does query CPUID flags and generates different optimized code depending on available features, as well as the performance profile of each CPU model. This is similar to always passing `-march=native` to GCC.

This can have a huge effect on a wide range of applications, not just those using particular CPU features. For example, each libc implementation typically has a separate implementation of `memcpy()` for each set of CPU features.


Okay, thank you. I need to read more on the subject then.


The "Performance Improvements in .NET" blog lists the new JIT support for instruction sets each year.

https://devblogs.microsoft.com/dotnet/performance-improvemen...


C# is nice, but it is nowhere near Rust in terms of safety or expressiveness. Thankfully they are finally adding discriminated unions (sum types) and other sorely missing features.

Unsafe in C# is much more dangerous than unsafe in Rust, precisely because it doesn’t actually color a function. It just allows its body to use pointers. This is why you have methods in the CLR called “DangerousFoo()”, and the compiler does nothing to prevent you from calling them.


Rust also has a much steeper learning curve. I can onboard an average developer who's familiar with TypeScript and have them be productive in C# in a week.

This one is more subjective, but I also think C# has a more mature and painless web stack.

I love both languages but for me they each fill a different role.


I think this just says that TypeScript and C# are more similar languages than either is to Rust, which I would agree with. Rust's learning curve is not steep at all if you're coming from C or C++.

C# also has a pretty steep learning curve if you have to care about the things where Rust excels, like correctness or maximum efficiency (which typically does not include web stuff). I would even say that Rust is the easiest language in which to approach that level of correctness and efficiency.

