The "term positions" caveat seems like a major limitation for human-oriented searches of logs, products, or whatever. I don't see it mentioned in what's next; will you address it in some future release, or is it out of scope for your intended use cases?
IIRC they are a generation behind on CPUs, so they must need capital to refresh and add new, competitive hardware. The default state at the bigger CSPs is to have this stuff rolling in even before it is generally available to the public.
That early-access privilege, economies of scale, and access to cash flow or capital paint a somewhat dire picture for DO even ignoring "AI", although the AI boom has made physical infrastructure much more difficult to do at scale if you aren't already doing physical infra at scale.
This is the kind of low-quality information you see on fanboy forums. There is nothing special about Linux drivers, and anyone can go look at them. A lot of hardware uses a HAL with a smaller OS adaptation layer on top, so most of the code is similar across OSes.
Virtualization means you now have multiple layers of drivers and privileged code in the mix to add and amplify bugs. It can and should work, but if you are doing this in the name of stability, that is a bit curious.
The reason Netflix can do what they do is that they have a good relationship with their HW vendors, NVIDIA (Mellanox) and Chelsio. If they were on Linux, they'd need the same level of support.
Did you read the same article as me? The word is used solely in the context of how you earn money as a project, i.e. sustain the effort. It's a bit of a leap to imply this is impure unless they made some contract stating the opposite or are doing something dark.
Timing. The 68k still had legs, i.e. the 68040 provided great drop-in performance and had an enormous ecosystem and economies of scale. By the time the RISC wars were reaching fever pitch, the POWER architecture and AIM alliance seemed like a blessing for combining ecosystems and economies of scale for the A and M constituents. And it was: successful product lines for 2-3 decades, from all sorts of embedded systems to G5 workstations to spacecraft.
This is both ambitious and seemingly not intractable, which is a rare Goldilocks combination.
As some contrast, consider something like GNUStep. You are never going to get macOS out of GNUStep, no matter how hard you try, because it is too high level (Cocoa) while simultaneously too ambitious. Similarly, with alternate kernels like ReactOS you will never get full replacement of Windows because it is too ambitious and intractable.
This project, though, sits at a cunning intersection: it uses the hardware support of Linux and sheds the graphics layer for something a lot simpler, with a minimal kernel module to support the existing mechanics of BeOS. This is more in line with Wine, which is and has been useful for a long time, but is even easier. This doesn't mean it will achieve a massive user base, but it seems like it will mature fast enough into something dedicated fans can enjoy and use productively.
Emulating the RPI PIOs instead of the TI PRUs is really a miss.
The PRUs really get a bunch right. Very specifically, the ability to broadside dump the ENTIRE register file in a single cycle from one PRU to the other is gigantic. It's the single thing that allows you to transition the data from a hard real-time domain to a soft real-time domain and enables things like the industrial Ethernet protocols or the BeagleLogic, for example.
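The handoff described above (a hard real-time producer atomically publishing its entire state to a soft real-time consumer) can be sketched in ordinary code. To be clear, this is only an analogy for the PRU's XIN/XOUT broadside transfer, not PRU code; the `Scratchpad` class and its method names are invented here for illustration:

```python
import threading

REGS = 30  # a PRU has 30 general-purpose 32-bit registers (R0-R29)

class Scratchpad:
    """Single-slot handoff buffer standing in for the PRU broadside scratchpad.
    On real silicon the whole register file moves in one cycle (XOUT/XIN);
    here a lock makes the copy atomic instead."""
    def __init__(self):
        self._lock = threading.Lock()
        self._slot = None

    def xout(self, regfile):
        # Hard-RT side: dump the entire register file in one atomic operation,
        # so the consumer can never observe a torn, half-updated state.
        with self._lock:
            self._slot = list(regfile)

    def xin(self):
        # Soft-RT side: read the most recent coherent snapshot.
        with self._lock:
            return None if self._slot is None else list(self._slot)

def hard_rt_producer(pad, iterations):
    regs = [0] * REGS
    for t in range(iterations):
        for i in range(REGS):      # tight deterministic sampling loop
            regs[i] = t * 100 + i
        pad.xout(regs)             # broadside dump: no per-register copies

pad = Scratchpad()
producer = threading.Thread(target=hard_rt_producer, args=(pad, 1000))
producer.start()
producer.join()

snapshot = pad.xin()               # soft-RT consumer sees a coherent file
print(snapshot[:4])                # all values from the same iteration
```

The point of the sketch is the atomicity: because the whole state moves at once, the soft real-time side never sees registers from two different sampling iterations mixed together, which is exactly what the single-cycle broadside transfer guarantees in hardware.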
Tooling for the RPI PIO design is probably a bit more accessible than the TI PRU situation. I'd say it's not really a miss, more of a necessity given bennies' proclivity towards open/available tools. Getting access to architecture details of the TI PRU would necessitate an NDA, would it not?
> Getting access to architecture details of the TI PRU would necessitate an NDA, would it not?
Nope. All the information is right in the publicly available architecture manuals. However, you don't need to copy the PRUs, per se. All this can be done with RISC-V.
The important parts are deterministic execution, the register file sideload between paired processors, and, possibly, single cycle instruction execution. None of these are precluded by using RISC-V.
And, given how large his PIO stuff is, I'd argue it would be better to do this with RISC-V.
I always wonder why Ubuntu is even on the radar anymore. It is a pile of questionable decisions with a billionaire ego bus factor. If you like apt, just use Debian. sid is fine for desktops if you are moderately technical.
The biggest thing that has prevented me from switching prod systems to Debian is that the window for upgrades is fairly small, at around a year: 13 came out Aug 9, 2025, and 12 goes EOL June 10, 2026. Compare Ubuntu, where 24.04 came out in April 2024 and 22.04 doesn't go EOL until May 2027. So an Ubuntu LTS support window covers two releases plus a year.
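A quick back-of-the-envelope check of those windows, using the dates claimed above (the exact Ubuntu release/EOL days are approximations; the point is the ratio, not the precise day):

```python
from datetime import date

# Dates as stated above; Ubuntu days-of-month are approximate stand-ins.
debian_13_release = date(2025, 8, 9)
debian_12_eol     = date(2026, 6, 10)

ubuntu_2404_release = date(2024, 4, 25)
ubuntu_2204_eol     = date(2027, 5, 31)

# Upgrade window: time between the successor's release and the predecessor's EOL.
debian_window = debian_12_eol - debian_13_release
ubuntu_window = ubuntu_2204_eol - ubuntu_2404_release

print(f"Debian 12 -> 13 window: {debian_window.days} days "
      f"(~{debian_window.days / 365:.1f} years)")
print(f"Ubuntu 22.04 -> 24.04 window: {ubuntu_window.days} days "
      f"(~{ubuntu_window.days / 365:.1f} years)")
```

That works out to roughly ten months to migrate a Debian fleet versus about three years on Ubuntu LTS, which is the gap the rest of this comment is about.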
I know a lot of people feel like this isn't a big deal, but even with Ansible it can be hard to get our fleet of a few hundred machines all upgraded in a one-year window when you're already busy. Some of them are easy, of course, but there are some that take significant time and also involve developer work, etc.
Don't get me wrong, I think Debian is great. But in the data center, there's definitely a case for a longer support window, and I like that about Ubuntu. RHEL is even better for that, but it is very nice that free Ubuntu and commercial Ubuntu are the same thing, whereas with RHEL there is the split where CentOS is the free one (haven't used RHEL in quite a while, obviously).
Except their LTS is a lie, or maybe plausible deniability for businesses that DGAF. They have no idea what they are doing with backports, and the lack thereof. And if you aren't paying, you aren't even receiving many of the updates.
> And if you aren't paying you aren't even receiving many of the updates.
Are you sure you didn't mean RedHat? Last I checked, there's no requirement to pay anything in order to use an LTS release of Ubuntu. Even if you go with Pro to get those extra years of Extended Support (to make it ~12 years?), you still get up to 5 licenses for personal use. No money asked, no *BS* subscription model. Isn't that more than enough for any non-commercial user?
Read https://ubuntu.com/security/esm carefully. The chance of running everything out of 'main' is close to zero. I am shocked by how little people understand this.
Main is all you need to set up a working system and deploy services. Much like BaseOS in RHEL, you get full support for those packages for 5+5 years. With snaps you effectively get rolling releases of LXD, microk8s, openstack, docker and other relevant things. What else do you need? Seriously, how come this isn't enough for a non-commercial user?
Because this is Stockholm syndrome; better community options prevail. Main is not all people deploy, and it is not the only repo enabled by default. OpenStack and Docker are legacy tech, I've never encountered anyone using LXD or microk8s, and thankfully I'll steer clear of that snap garbage barge.
However, I've been extremely happy with Devuan. It is Debian minus some bad decisions the Ubuntu voting block forced upstream (for instance, there's no systemd).
P.S. been shipping it for a while https://www.freshports.org/databases/pg_textsearch/ :)