Hacker News | nine_k's comments

What, not an ML model?

AMD has previously used neural net predictors.

https://www.cs.utexas.edu/~lin/papers/tocs02.pdf


I wonder if it's possible to guide the intonation in any way.

The problem is that the people holding the actual purse don't care enough about their subordinates' experience. They care about the price tag, and about compliance. Apparently the makers of Teams think along the same lines. None of them thinks in terms of lost productivity.

Yes, compliance is a big one. And it's not so much that they are actively hostile to user productivity (and quality of life), they just don't care enough.

This is the kind of runaway self-improving development that proponents of the singularity keep talking about.

The problem is that training appears to be really slow and expensive. Some quality thinking is required to improve the training approach and the architecture before committing resources to training a new large model. And even the largest models are still not nearly as good at quality thinking as the best humans.


Scams and "social engineering", as they have long been known, could be a good approximation.

Right, but with scams you trick a human into doing something. With agents, you give them the keys upfront - terminal, file system, API keys - because otherwise what's the point? You can't have an agent that asks permission for every action, you'd just be babysitting it all day. So the question isn't "how do we stop someone from being tricked." It's "how do we secure something that already has root access and runs on vibes instead of logic."

Don't give it root access.

That answer hasn't changed since day one of LLMs, despite some of the things people are attempting to build these days: if you don't want to get in trouble, don't give LLMs access to anything that can cause actual harm, and don't give them autonomy.


Sure, that works today. But Meta is cutting 20% of its workforce. So is everyone else. The whole bet is that agents replace human work - and that only works if they can actually do things. Deploy, access databases, call APIs.

"Don't give it access" is like saying "don't connect to the internet" in 1995. The question isn't whether agents get these permissions. They will. The question is what happens when they do.


Let's see how well it works for them. Apparently Salesforce was a bit overly enthusiastic about layoffs, and recently had to backtrack.

How do we expect everything to go all right if we give prod access to a pack of very smart dogs that know some key tricks? Now the same question, but with the humans actually leaving the room?

My answer is simple: it just won't be all right this way. The problems will cost the managers who drank too much kool-aid; maybe they already do (check out what happened at Cloudflare recently). Sanity will return, now as a hard-won lesson.


> silently wrap around, producing incorrect and unpredictable

Now I'm more scared to use OpenBSD than I was a minute ago.

I strongly prefer software that fails loudly and explicitly.


Yeah, that's pretty appalling.

Regardless of how good the philosophy behind something is, if it's as niche and manpower-constrained as OpenBSD, it's going to accumulate problems like this.


I actually think this isn't even surprising from OpenBSD, philosophically. They still subscribe to the Unix philosophy of old, more so than FreeBSD, and much, much more than Linux.

That is, "worse is better" and it's okay to accept a somewhat leaky abstraction or less helpful diagnostics if it simplifies the implementation.

This is why `ed` doesn't bother to say anything but "?" in response to erroneous commands. If the user messes up, why should it be the job of the OS to hand-hold them? Garbage in, garbage out. That attitude may seem out of place today, but consider that it came from a time when a program might have one author and 1-20 users, so their time was valued almost equally.


> That attitude may seem out of place today

It absolutely doesn't. Everywhere I've worked we were instructed to give terse error messages to the user. Perhaps not a single "?", but "Oops, something went wrong!" is pretty widespread and equally unhelpful.


Code size would balloon if you tried to format verbose error messages. I often look at the binaries of old EPROMs, and I notice that (1) ASCII text makes up a large fraction of the binary, and (2) it's still just categories (“Illegal operation”). For the 1970s, we're talking user programs that fit in 2K.

I write really verbose diagnostic messages in my modern code.


There was also an implicit saving back then, in that an error message could be looked up in some other system (typically, a printed manual). You didn't need to write 200 characters to the screen if you could display something much shorter, like SYS-3175, and be confident that the user could look it up in the manual and understand what they were being told and what to do about it.

IBM were experts at this, right up to the OS/2 days. And as machines got more powerful, it was easy to put in code to display the extra text by a lookup in a separate file/resource. Plus it made internationalization very easy.
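The code-plus-catalog pattern described above is easy to sketch. Here the catalog contents are purely illustrative (SYS-3175 is a real OS/2-style error number, but the message texts and the German translation are made up for the example):

```javascript
// Toy sketch of the code-plus-catalog pattern: the program emits only a
// short code; the full text lives in a separate, per-locale message catalog.
const catalogs = {
  en: { 'SYS-3175': 'The program attempted to access memory it does not own.' },
  de: { 'SYS-3175': 'Das Programm hat auf fremden Speicher zugegriffen.' },
};

// Look the code up in the catalog for the given locale; fall back to
// pointing at the manual, as the printed-manual era effectively did.
function explain(code, locale = 'en') {
  const text = (catalogs[locale] || {})[code];
  return `${code}: ${text || 'See the reference manual.'}`;
}

console.log(explain('SYS-3175'));
```

Swapping the catalog object for a per-locale resource file is what makes the internationalization cheap: the code paths never change, only the lookup data does.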


> That attitude may seem out of place today

That attitude was out of place at every point. It was excusable when RAM and disk space were scarce; it isn't today, when it has nothing but drawbacks.


Even in that scenario that attitude seems out of place, considering a feature is implemented once and used many times.

But this vulnerability is enabled by a very creative exploitation of the complicated bind-mounting scheme used by snap-confine. Just reading about these mounts between /usr/lib and /tmp and back triggered my sense of a potential security vulnerability.

Slightly tangential but I never ended up switching to nix (or guix) precisely because I don't fully understand the theory behind why things were done the way they were done and where the security boundaries are supposed to lie relative to a "regular" distro. I found plenty of prescriptive documentation giving me recipes to do anything I might be interested in doing but not much in the way of design documents explaining the system itself.

I never asked around so maybe that's on me. Debian works just fine though and containers are (usually) simple enough for me to wrap my head around.

I didn't end up using Flatpak for the same reason.


If you already sandbox your apps on Debian, there should be no security difference in doing so on NixOS, no?

The globally accessible /nix/store is frightening, but read-only. The same applies to the NixOS symlinks pointing there. This vulnerability was enabled by a writable /tmp and a root process reaching into it; that would be bad on Debian and NixOS alike.


I'm not suggesting the presence of a vulnerability, just that I'm not comfortable switching to a complex system where I have little to no understanding of the logic behind the design. My remarks were nothing more than a tangential gripe.

I love the TrackPoint for navigation and UI control. But it's not that great for drawing / painting (even though I have used it for that, successfully).

I frankly would care little about the speed; it can always be improved with a better propellant. I would care about a cheap ability to guide the rocket. If it's there, it may be consequential for a real (para)military application.

(A quadcopter is perfectly guidable, but it must be slower than a rocket, and costs more than $96.)


Guidance systems have speed limitations. Just because it works when slow does not mean it will work if you upgrade the propellant.

I don't see it to be such a pain.

> Bundle a full application into a Single Executable.

Embed a zip file into the executable, or something. Node sort of supports this since v25; see --build-sea. Bun and Deno have supported this for longer.

> Run tests without touching the disk.

This must be left to the host system to decide. Maybe I want them to touch the disk and leave traces useful for debugging. I'd go with tmpfile / tmpdir; whoever cares knows to mount them as tmpfs, which lives in RAM. (Or a RAM disk under Windows.)

> Sandbox a tenant’s file access. In a multi-tenant platform, you need to confine each tenant to a directory without them escaping

This looks like the wrong tool, again. Run your Node app in a container (like you are already doing), and mount every tenant's directory as a separate mount point into your container. (Similarly with BSD jails.) This seems like the only problem that is not trivial to solve without a "VFS", but I'm not at all certain that such a VFS would be as well audited as Docker, or nsenter and unshare. The amount of work necessary to implement it is too much for the niche benefit it would provide.

> Load code generated at runtime.

See tmpfs for a trivial answer. For a less trivial answer, I don't see how Node's code loader is bound to a filesystem. If it can import via https, just use ESM loader hooks and register() your loader, assuming you're running Node ≥ 20.6.
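For generated code specifically, even a `data:` URL works with Node's ESM loader, no filesystem involved. A minimal sketch (the module source is made up for the example):

```javascript
// Load runtime-generated code without touching the disk by importing a
// data: URL, which Node's ESM loader accepts.
const src = 'export const answer = () => 42;';
const url = 'data:text/javascript,' + encodeURIComponent(src);

// Dynamic import() works from both CommonJS and ESM.
import(url).then((mod) => {
  console.log(mod.answer()); // 42
});
```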

