Previous owner can start the same business immediately and poach all the clients, reducing the value of the sold business to zero. Buyers obviously anticipate this and won't buy the business without the non-compete.
There's also a big difference between starting a competing business like your example, and being barred from say working on "cloud infrastructure" because your previous employer also worked on "cloud infrastructure". It can be blurry for executives, but in general noncompetes seem to be used to push pay down more than for any legitimate business purpose.
How many compatibility issues would macOS realistically cause? The Windows DX felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just kinda seems to work the same.
It’s not the tooling for me; macOS is just bad as a server OS for many reasons. Weird collisions with desktop security features, aggressive power saving that you have to fight against, root not being allowed to do root stuff, no sane package management, no OOB management, ultra-slow OS updates, and generally but most importantly: the UNIX underbelly of macOS has clearly not been a priority for a long time and is rotting, with weird, inconsistent, undocumented behaviour all over the place.
Linux is not immune to BIOS/UEFI firmware attacks either. Secure Boot, TPM, and LUKS can work well together, but you still depend on proprietary firmware that you do not fully control. LogoFAIL is a good example of that risk, especially in an evil maid scenario involving temporary physical access. I think Apple has tighter control over this layer.
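As a sketch of how those pieces can fit together on Linux, a TPM2-bound LUKS unlock via systemd-cryptenroll looks roughly like this (the device path and PCR selection are illustrative, not a recommendation; adjust for your threat model):

```
# Enroll a TPM2-backed key for an existing LUKS2 volume.
# Binding to PCR 7 ties unsealing to the Secure Boot state, so a
# tampered boot chain should fail to release the key automatically.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then let the initrd try the TPM2 token at boot via /etc/crypttab:
# root  /dev/nvme0n1p2  none  tpm2-device=auto
```

Note this still trusts the firmware that implements Secure Boot and measures the boot chain, which is exactly the layer LogoFAIL-style attacks target.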
You completely misunderstood the quoted remark you responded to. The desktop security features in macOS that interfere with loading unblessed binaries and libraries are a huge pain in the ass, especially for headless server use.
For server usage? macOS is the least-supported OS in terms of filesystems, hardware and software. It uses multiple gigabytes of memory to load unnecessary user runtime dependencies, wastes hard drive space on statically-linked binaries, and regularly breaks package management on system upgrades.
At a certain point, even WSL becomes a more viable deployment platform.
Provisioning, remote management, containers, virtualization, networking, graphics (and compute), storage, all very different on Mac. The real question is what you would expect to be the same.
I tried to use SR-IOV to virtualize Mellanox NICs with VLANs on Red Hat Linux. Long story short, it did not work. Per NVIDIA, the OS also has to run Open vSwitch. This work was on an already complex setup in finance ... so adding Open vSwitch was considered too much additional complexity. This requirement is not something I ran across in the docs.
The situation in networking is a lot different than graphics. I don't know much other than that it depends on what specific protocol, card, firmware, and network topology you're using and there's not really generic advice. If the question is setting up Ethernet switching inside the card so VFs can talk to the network, then I think the Linux switchdev tools can configure that on their own without Open vSwitch but you probably need to find someone who understands your specific type of deployment for better advice.
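For context, the switchdev route mentioned above looks roughly like this (the PCI address and interface names are placeholders; exact steps vary by card, firmware, and driver version):

```
# Switch the NIC's embedded switch from legacy SR-IOV mode to switchdev
# (PCI address is a placeholder for the Mellanox/NVIDIA NIC).
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Each VF then gets a representor netdev on the host, which can be
# bridged or routed like any other interface, e.g. with a plain Linux
# bridge and VLAN filtering instead of Open vSwitch:
ip link add br0 type bridge vlan_filtering 1
ip link set eth0 master br0           # uplink port
ip link set pf0vf0 master br0         # VF representor (name varies)
bridge vlan add dev pf0vf0 vid 100 pvid untagged
```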
Depending what you're doing AMD's support for VirtIO Native Context might be a useful alternative (I think it gives less isolation which could be good or bad depending on use).
In my experience the behavior variation between models and providers is different enough that the "one-line swap" idea is only true for the simplest cases. I agree the prompt lifecycle is the same as code though. The compromise I'm at currently is to use text templates checked in with the rest of the code (Handlebars but it doesn't really matter) and enforce some structure with a wrapper that takes as inputs the template name + context data + output schema + target model, and internally papers over the behavioral differences I'm ok with ignoring.
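A minimal Python sketch of the wrapper shape described above (all names, templates, and per-model settings are made up for illustration; real code would load Handlebars files from the repo and call an actual client):

```python
from string import Template

# Checked-in templates, keyed by name (hypothetical example content).
TEMPLATES = {
    "summarize": Template("Summarize the following text:\n$text"),
}

# Per-model quirks papered over in one place (illustrative values only).
MODEL_DEFAULTS = {
    "gpt-x": {"json_mode": True},
    "local-llm": {"json_mode": False},
}

def render_prompt(template_name, context, output_schema, target_model):
    """Render a template into a (prompt, settings) pair for one model."""
    prompt = TEMPLATES[template_name].substitute(context)
    settings = dict(MODEL_DEFAULTS.get(target_model, {}))
    if output_schema is not None and not settings.get("json_mode"):
        # Models without native structured output get the schema
        # appended as an instruction instead.
        prompt += f"\nRespond as JSON matching: {output_schema}"
    return prompt, settings

prompt, settings = render_prompt(
    "summarize", {"text": "Hello world"}, '{"summary": "string"}', "local-llm"
)
```

The point of the shape is that callers never see which behavioral differences were papered over; swapping `target_model` only changes the settings table lookup.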
Model testing and swapping is one of the surprises people really appreciate DSPy for.
You're right: prompts are overfit to models. You can't just change the provider or target and know that you're giving it a fair shake. But if you have eval data and have been using a prompt optimizer with DSPy, you can try models with the one-line change followed by rerunning the prompt optimizer.
Dropbox just published a case study where they talk about this:
> At the same time, this experiment reinforced another benefit of the approach: iteration speed. Although gemma-3-12b was ultimately too weak for our highest-quality production judge paths, DSPy allowed us to reach that conclusion quickly and with measurable evidence. Instead of prolonged debate or manual trial and error, we could test the model directly against our evaluation framework and make a confident decision.
It's not just about fitting prompts to models, it's things like how web search works, how structured outputs are handled, various knobs like level of reasoning effort, etc. I don't think the DSPy approach is bad but it doesn't really solve those issues.
Having done some mobile development, where app sandboxes have been prevalent for years: it's annoying to deal with but necessary. Given the bad behavior some devs attempt, often ad SDKs trying to perma-cookie users, steal clipboards, etc., having a platform that can support app isolation seems necessary even for normal desktop usage.
Having some kind of access control list or other method of enforcing access rights for windows and clipboards is definitely a good thing.
However, such a thing could be added relatively easily to X11 without changing the X protocol, so this does not appear to be sufficient motivation for the existence of Wayland.
I have not tried Wayland yet, because I have never heard anyone describing an important enough advantage of Wayland, while it definitely has disadvantages, like not being network transparent, which is an X11 feature that I use.
Therefore, I do not know which is the truth, but from the complaints that I have heard the problem seems to be that in Wayland it is not simple to control the access rights to windows and clipboards.
Yes, access to those must be restricted, but it must be very easy for users to specify when to share windows with someone else or between their own applications. The complaints about Wayland indicate that this sharing mechanism has not been thought through well. It should have been something as easy as clicking a set of windows to specify something like the first being allowed to access the others, or like each of them being able to access all the others.
This should have been a major consideration when designing access control and it appears that a lot of such essential requirements have been overlooked when Wayland was designed and they had to be patched somehow later, which does not inspire confidence in the quality of the design.
Having a medium understanding of the graphics hardware and software stack, and being an everyday desktop Linux user recently, it's hard to square these kinds of complaints with the actual technical situation. Like, people say X11 is network transparent, but in practice that's not true. People argue the same problems could be solved in X11, but in practice, despite a decade-plus of complaining about Wayland, nobody did the work to make those improvements to X. Unlike, say, the systemd situation, Wayland just seems like a better and necessary design?
At a higher level, I've never found someone who is deeply familiar with the Linux GUI software stack who also thinks Wayland is the wrong path, while subjectively as a user most or all of my Linux GUI machines are using Wayland and there's no noticeable difference.
From an app dev perspective, I have a small app I maintain that runs on Mac and Linux with GPU acceleration, and at no point did I need to make any choices related to Wayland vs X.
So, overall, the case that Wayland has some grave technical or strategic flaws just doesn't pass the smell test. Maybe I'm missing something?
That means that I can run a program, e.g. Firefox, either on my PC or on one of my servers, and I see the same Firefox windows on my display and I am able to use Firefox in the same way, regardless if I run it locally or on a server.
The same with any other program. I cannot do the same with Wayland, which can display only the output of programs that are running on my PC.
This is an example of a feature that is irrelevant for those who have a single computer, but there are enough users with multiple computers for whom Wayland is not good enough.
Wayland was designed to satisfy only the needs of a subset of the Linux users. This would have been completely fine, except that now many Linux distributions evolve in a direction where they attempt to force Wayland on everybody, both on those for which Wayland is good enough and on those for which Wayland is not good enough.
I have already passed through a traumatic experience when a gang of incompetents captured an essential open-source project, removed all the features that made it useful, and then forced their ideas of what the application should do upon the users. That happened when KDE 3.5 was replaced by KDE 4.
After a disastrous testing of KDE 4 (disastrous not due to bugs but due to intentional design choices incompatible with my needs), I reverted to KDE 3.5 for a couple of years, until the friction needed to keep it became so great that I was forced to switch to XFCE. At least at that time there was an alternative.
Now, Wayland does not have an alternative, despite not being adequate for everybody. For now, X11 works fine, but since it seems unlikely that Wayland will ever be suitable for me, I am evaluating whether I should maintain a fork of X11 for myself or write a replacement containing only the functionality that I need. That would not be so complex, as there are many features of X11 and Wayland that I do not use, so implementing only what I really need might be simple enough. The main application that I do not control would be an Internet browser, like Firefox or Chromium, but I could run that in a VM with Wayland, which would be preferable for security anyway.
In practice, for my purposes, X11 forwarding never really worked for some applications (due to GLX and bandwidth, among other issues). So that's anecdote against anecdote. I just checked, and it appears Wayland adoption has helped push apps to adopt EGL, which helps. There are tradeoffs both ways, but let's not pretend X11 is adequate for everyone or every application. Subjectively, when I use GNOME now, it's by far better than any version of desktop Linux I used a decade or two ago.
I use X over SSH all the time. And as a fun aside, I can run an ancient Linux distro on my 386, and it can use a modern machine as its X display! Not useful, but cool. I've actually done this with modern SBCs as well, and I don't think there's any way to replicate this on Wayland.
Window positioning API seems like the biggest oversight to me, as someone developing a multi window desktop app at work. That and global hotkeys and accessibility.
Due to my reliance on X over SSH, I only run Wayland where I strictly need it (namely, for my HDR display).
> Like, people say X11 is network transparent but that's not in practice true
Not done it for a while, but ssh into remote machine and start a GUI app used to work for me. It needs one setting in ssh config AFAIK.
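For reference, the setting in question is OpenSSH's X11 forwarding; a minimal client-side config (host name is a placeholder):

```
# ~/.ssh/config
Host myserver
    ForwardX11 yes
```

Then `ssh myserver firefox` runs the app remotely with its window on the local display; `ssh -X myserver` does the same without the config entry. The server also needs `X11Forwarding yes` in its sshd_config.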
> subjectively as a user most or all of my Linux GUI machines are using Wayland and there's no noticeable difference.
Not anymore. There used to be, though, and I think it may have got a bad name by being rolled out by distros (especially as the default) before it was ready for most users. I can remember several issues; the worst was with screenshots.
I can accept that as I use a rolling-release distro as a daily driver, so you expect some issues, but it's not OK if those rough edges hit people using more mainstream distros (which I think they did).
I have used X11 over WAN and it worked fine. Even connecting to a different country, with a significantly slower connection than I have now.
I do not know the current state of RDP etc., but does it allow you to open a single application rather than an entire desktop on Linux, and does it display correctly for the device you are using rather than the one the app is running on?
Only certain Linux RDP clients support per-app windows and I'm not sure how to actually set that up on Linux (only used WinBoat which is a Windows host and does all the Linux config automatically)
> preventing apps from having unrestricted access to screen contents and clipboard
My question was: "How often has that been a problem? Is it a vulnerability that has been, or is likely to be, exploited in practice?"
I would have thought that filesystem access is the biggest issue, followed by network access. There are solutions for these, of course, but in most cases the default is either unrestricted or what the app asked for.
After years of good experiences I'm pausing buying any more HP hardware. My recent Z series desktop was mis-assembled and customer service getting it resolved was atrocious, so incredibly bad it dissuaded me from even trying a replacement. I don't know what happened over there.