Audio ultimately boils down to a 1D data track. The ways to play that track are comparatively well understood, and well abstracted from the workload itself. We've been using the 3.5mm jack at scale since the original Walkman, for example. That's 1979.
Graphics, on the other hand, changes constantly. The Apple II, from around the same timeframe, had a 280×192 resolution (not even a 4:3 aspect ratio) and 16 colors that only existed because of a hack in the NTSC spec. Now we have a dozen common aspect ratios, two common form factors to support (mobile and monitor), dozens of different resolutions, running on thousands of different pieces of hardware whose configuration combinations are near impossible to test, let alone the variety of behavior that comes from their software; the list is just endless.
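For a sense of scale, the jump in raw pixel counts is easy to sketch (I'm using the Apple II's hi-res mode as the low end, and a 5K panel as an assumed modern high end):

```go
package main

import "fmt"

func main() {
	// Apple II hi-res mode: 280 x 192 pixels.
	appleII := 280 * 192
	// A modern 5K panel (my assumption for the high end): 5120 x 2880.
	fiveK := 5120 * 2880
	fmt.Printf("%d pixels -> %d pixels (~%dx increase)\n", appleII, fiveK, fiveK/appleII)
}
```

That's roughly a 274x increase in pixels alone, before accounting for color depth, refresh rate, or the combinatorics of hardware configurations.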
Audio is played by speakers; there isn't much variance there. Graphics, on the other hand, changes constantly, and most of those changes result in throwing out a lot of software. The situation is so incredibly bad that most cross-platform products have thrown in the towel and resorted to using perhaps the second most complicated piece of software humanity ever devised - the browser - instead of trying to get a window and a button to work on two different operating systems.
Do people here really not know anything about surround sound, or multichannel audio in general? Apple's new surround support? Gaming uses of Ambisonics? Film scoring? Do they really not understand how you need to mix differently for phone playback, car playback, and "stereo system" playback (or at least pick one, because that's the best you can do)?
I thought the difference between x channels that play 1D data (that remained unchanged for half a century almost) and going from 50 thousand pixels to 15 million, with dozens of different sizes and aspect ratios, would be readily apparent. Alas, I was mistaken.
I don't pretend to know much, if anything, about the intricacies of video and graphics programming. It would be nice if people who think that audio is "x channels that play 1D data (that remained unchanged for half a century almost)" could return the same courtesy towards audio.
5.1 surround was invented in 1987. Sound mixing as we know it did not change in the same way graphics did. One should not expect courtesy if they can't offer it in the first place.
> CRPGs have done nonlinear gameplay narratives for decades at least.
CRPGs have done the illusion of nonlinear gameplay narratives for decades. Baldur's Gate 3 simply has a vastly more polished version of it, rather than paying lip service like recent mainline RPGs - anything from Bethesda, for example.
I use Firefox because it has the most hassle-free hardware decoding on Linux. However, basically everything feels better in Brave, even with the same set of plug-ins.
Curious whether you've tried with the new (non-PPA) repo directly from Mozilla as of v122 [1]. I think the old PPA was also Mozilla, so I don't know what may have changed aside from being more publicly acknowledged. Might be worth a try?
I don't have an Ubuntu VM at-hand but on Debian bookworm it installed fine, and (after tweaking one line in profiles.ini to point to my old ESR profile) it loaded and played Widevine-protected videos without any issues.
I have the same card, and the above works with everything I've thrown at it so far. I haven't even installed the amdgpu-pro drivers, btw; I only have RADV (which Steam installed by default).
> that the complexity of the Vulkan is like 5x more than OpenGL for… 5% better performance?
The complexity of Vulkan can (and in naive implementations, will) slow things down in comparison to OpenGL. What you get with Vulkan isn't +X% more performance, but consistent performance.
Both OpenGL and DirectX already did all of the things that you now do yourself with Vulkan/DX12. The difference is that the drivers were black boxes back then, and everything ran on heuristics. A relatively minor change could evict you from the fast path into oblivion. You had to blindly figure your way out, or, if you were "big enough", you could contact the driver team, at which point you would enter the world of GPU politics. The GPU mafia was, and is, a real thing.
Vulkan cuts straight through that. Yes, synchronization is hard, but it is far harder to figure out when the driver arbitrarily inserts a gigabarrier and when it doesn't. Even with Vulkan/DX12 you still encounter these issues, but at least with the newer APIs you can reason about things and be generally correct.
It was never about more performance. It was always about consistent performance.
> New Dune however feels more like Young Adult Entertainment.
Paul Atreides (the main character) is 15 years old in Dune.
Most people that read and revered Dune probably did so during their young adult years.
I say this as someone who loves Herbert's works, but it is really apparent that the first Dune book originated from an ecological article and from mushrooms (of the psychedelic kind).
There's something to be said about the philosophy of simplicity in C. However, C pretty clearly evolved in the opposite direction. This is almost entirely due to compiler developers, and to the fact that C has to cater to so many different hardware requirements.
Unlike C++, ISO C is nothing more than the collection of features that more than one compiler has implemented (and that doesn't break the compilation of microcontroller firmware released literally 40+ years ago). Anything else is GNU C. And GNU C is so incredibly complex and obtuse at times that Clang still can't compile glibc after years of work.
Zig was not created in the same spirit that created and evolved C. Zig was created with the idea of a simple C, one that does not match reality, and frankly it leans more on Go than on C. Zig, Odin, V - nearly all of these better-C languages are more inspired by Go than by what C actually is. What they want from C is just the performance; that's why they're so focused on manual memory management in one way or another.
Zig borrows some ideas from Go; defer is probably the big one. But if you watch "The Road to Zig 1.0" you will understand that Zig is not really a Go derivative. Most things in Zig directly address issues in C.
If you squint, Zig's error-return fusion looks a bit like Go's tuple error return, but it is really more about "first-classing certain C conventions" than "adopting a Go pattern". The same goes for slices.
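The Go convention being compared here can be sketched like this (a minimal example; `parsePort` is a made-up function for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort illustrates Go's (value, error) tuple convention: the error
// travels alongside the result, instead of an errno-style global or a
// sentinel return value as in C.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("port:", p)
	}
	if _, err := parsePort("99999"); err != nil {
		fmt.Println("error:", err)
	}
}
```

Zig instead fuses the error into the return type as an error union (`!T`), which arguably is closer to C's "return a status code" convention made type-safe than to Go's two-value tuple.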
1. Go's defer only allows function calls, not arbitrary expressions.
2. Go's defer allocates memory dynamically and attaches the deferred call to the function's exit rather than the current scope's exit. This has surprising and harmful consequences if you use it inside a loop.
So, I wouldn't say that Zig's defer is borrowed from Go.
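The function-exit semantics in point 2 are easy to demonstrate (a minimal sketch; the `deferOrder` helper is mine, not from the thread):

```go
package main

import "fmt"

// deferOrder shows that deferred calls queued inside a loop do NOT run
// at the end of each iteration: they all fire when the function returns,
// in LIFO order. The named return value lets the deferred closures
// append after the loop body has finished.
func deferOrder() (order []string) {
	for i := 0; i < 3; i++ {
		i := i // capture the loop variable per iteration (pre-Go 1.22 habit)
		defer func() { order = append(order, fmt.Sprintf("deferred %d", i)) }()
	}
	order = append(order, "end of loop body")
	return // the three deferred calls run here, after the loop is long done
}

func main() {
	for _, s := range deferOrder() {
		fmt.Println(s)
	}
}
```

This prints "end of loop body" first, then "deferred 2", "deferred 1", "deferred 0". In Zig, by contrast, a `defer` inside a loop body runs at the end of each iteration, because it is attached to scope exit rather than function exit.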
Most of C's issues were directly addressed in Go as well. Only, Go did away with manual memory management.
C never had a philosophy of keeping things simple through the years. If it did, we would not have time-traveling UB to begin with. The lauded simplicity and explicitness come directly from Go, where that philosophy was crystallized and preserved very early on.
You might say it is just semantics to call improving upon C "being a derivative of Go with manual memory management". You would be partially correct: it is semantics, but semantics that hold up very well if you look at how languages have developed over the decades.
DOTS isn't just ECS. DOTS was supposed to be a Data-Oriented Tech Stack. Currently it only has ECS with some extra bits; ergo, DOTS is an unfinished mess. And that unfinished mess - along with a mandate to release for Game Pass - culminated in CSII suffering from utterly ridiculous issues.
They were already entrenched with their first game, and used that opportunity to buy into the DOTS hype.
DOTS had a very troubled development cycle; most of what was promised was eventually abandoned, and key people were laid off. Anyone who bought into it is now stuck in hybrid-DOTS limbo, where they have to invest a substantial amount of time into making the engine's features work.
And don't even get me started on prints :)