SecuROM back in the day caused plenty of legitimately purchased copies to not work. You'd have a physical disc with the game on it from the store, and SecuROM decided it wouldn't work on your computer for unknown, undebuggable reasons.

Piracy may be a problem, but this was a problem for customers who were willing to give a company money. We stopped buying anything with SecuROM on it after 1-2 of those situations.


I also have a Dungeon Crawl: Stone Soup save with my first 3 runes lying around somewhere.

I'm aware I will probably lose it, but I'm also anxious about touching it. Maybe I should just get myself some good coffee tomorrow and get it over with. The biggest lesson from that save is how carefully and defensively you have to play if you want to consistently get further.


DCSS has also changed so much that it's hardly the same game anymore. It may be a better game in many ways, but it's not the game I spent time getting to know and getting good at.

Maybe an early example of "forever games" like Minecraft, which just keep getting expanded and move ever further from the game you knew.


Some people do have way too much fun with EICAR:

https://www.youtube.com/watch?v=cIcbAMO6sxo

This guy put the EICAR test string into a barcode and started to scan it on various systems, with rather funny effects.
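
If you want to recreate the trick, here's a minimal sketch using the qrcode Python library; the video uses linear barcodes, a QR code is just the easiest stand-in to generate, and the EICAR string is the standard published test string:

```python
# Hedged sketch: encode the standard EICAR test string into a scannable code.
# Whatever scanner decodes it and writes the text into a file or form field on
# the target system may then trip the local antivirus -- which is the whole joke.
import qrcode

EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

img = qrcode.make(EICAR)   # returns a PIL image
img.save("eicar_qr.png")   # print it, scan it, watch the endpoint protection react
```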


And a lot of these older tools are not meant to be fed untrusted, unvetted input. The patch shown there confused me for quite a while.

Or, more snarkily: `tee` is also a huge security problem if you pipe untrusted input into `tee -a /etc/passwd`, such as `curl | tee -a /etc/passwd`. Not many things are safe with a `curl |` in front of them. I think `yes` might be?


The owner rug-pulls, or Broadcom buys the owner and starts squeezing.

Lifetime is the underlying issue.

For example, it is possible to create a Vault lease for exactly one CI build and tie the lifetime of the secrets the CI build needs to the lifetime of this build. Practically, this would mean that e.g. a token, an OAuth client-id/client-secret, or a username/password credential to publish an artifact is only valid while the build runs, plus a few seconds. Once the build is done, it's invalidated and deleted, so exfiltration is close to meaningless.
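
A rough sketch of that flow with the hvac Python client; the Vault address, AppRole credentials, the "ci-publisher" role, and the build step are made-up placeholders, not a real setup:

```python
# Hedged sketch: a CI build obtains a dynamic DB credential whose lease belongs to
# this build, and revokes the lease explicitly as soon as the build is finished.
import hvac

client = hvac.Client(url="https://vault.example.internal:8200")
client.auth.approle.login(role_id="<ci-role-id>", secret_id="<per-build-secret-id>")

# Dynamic PostgreSQL user, only valid for the lease duration configured on the role.
lease = client.secrets.database.generate_credentials(name="ci-publisher")
username = lease["data"]["username"]
password = lease["data"]["password"]

try:
    run_build_and_publish(username, password)   # placeholder for the actual CI steps
finally:
    # Build done: drop the lease so the credential stops working within seconds.
    client.sys.revoke_lease(lease_id=lease["lease_id"])
```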

There are two things to note about this though:

This means the secret management has to have access to powerful secrets, which are capable of generating other secrets. So technically we are just moving the goalposts from one level to another. That is usually fine though - I have 5 Vault clusters to secure, compared to 5 different CI builds every 10 minutes or so, or a couple thousand application instances in prod. I can pay more attention to the Vault clusters.

But this is also not easy to implement. It needs a Vault cluster, dynamic PostgreSQL users take years to get right, and we are discovering every month how terrible applications can be at handling short-lived certificates (and some even regress - Grafana seems to have regressed with PostgreSQL client certs in v11/v12). We've found quite a few applications that never considered that certs with less than a year of lifetime even exist. Oh, and if your application is a single-instance monolith, restarting to reload new short-lived DB certs is also terrible.
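
For that last point, the pattern we push applications towards instead of restarts looks roughly like the sketch below; all names and fields are hypothetical, and the credential source (Vault, files on disk) is whatever the app already uses:

```python
# Hedged sketch (hypothetical names): re-read short-lived DB credentials/client certs
# near expiry and build new connections from them, instead of restarting the monolith.
import time
import psycopg2  # assuming the app talks to PostgreSQL via psycopg2

class RotatingConnectionFactory:
    def __init__(self, fetch_credentials, refresh_margin=60):
        self._fetch = fetch_credentials   # e.g. reads fresh creds/cert paths from Vault or disk
        self._margin = refresh_margin     # refresh this many seconds before expiry
        self._creds = None

    def _current(self):
        if self._creds is None or self._creds["expires_at"] - time.time() < self._margin:
            self._creds = self._fetch()
        return self._creds

    def connect(self):
        c = self._current()
        return psycopg2.connect(
            host=c["host"], dbname=c["dbname"],
            user=c["username"], password=c["password"],
            sslcert=c["client_cert_path"], sslkey=c["client_key_path"],
        )
```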

Automated, aggressive secret management and revocation is, IMO, a huge obstacle to many secret exfiltration attacks, but it is hard to do, and a lot of software resists it very heavily on many layers.


Yeah, that was an interesting discovery in a development meeting. Many people were chasing after the newest model and everything, though for me, Sonnet 4.6 solves many topics in 1-2 rounds. I mainly need some focus on context, instructions, and keeping tasks well-bounded. Keeping the task narrow also simplifies review and staying in control, since I usually get smaller diffs back that I can understand quickly and manage or modify later.

I'll look at the new models, but increasing the token consumption by a factor of 7 on Copilot, and then running into all of these budget management topics people talk about? That seems to introduce even more flow-breakers into my workflow, and I don't think it'll be 7 times better. Maybe in some planning and architectural topics where I used Opus 4.6 before.


I wonder if there are different use cases. You sound like you’re using an LLM in a similar way to me. I think about the problem and solution, describe what I need implemented, provide references in the context (“the endpoint should be structured like this one…”) and then evaluate the output.

It sounds like other folks are more just throwing an LLM at the problem to see what it comes up with. More akin to how I delegate a problem to one of my human engineers/architects. I understand, conceptually, why they might be doing that, but I know that I stopped trying that because it didn’t produce quality. I wonder if the newer models are better at handling that ambiguity.


> The real question isn't "do you need a database" but "do you need state" — and often the answer is no.

We have a bunch of these applications and they are a joy to work with.

Funnily enough, even if you have a database, if you wonder whether you need caches to hold state in your application server, the answer is, kindly, fuck no. Really, really horrible scaling problems and bugs lie down that path.

There are use cases for storing expensive-to-compute state in Varnish (HTTP caching), memcached/Redis (expensive, complex data structures like a friendship graph), or Elasticsearch/OpenSearch (aggregated, expensive full-text search), but caching SQL results in an application server because the database is "slow" beyond a single transaction brings nothing but pain in the future. I've spent so much energy working around decisions born out of simply bad schema design and tuning...
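
A contrived sketch of why; every name here is made up, but this is the shape of bug an in-process cache produces once you run more than one instance:

```python
# Hedged sketch: two app-server instances each hold an in-process cache of a SQL
# result. After instance A writes, instance B keeps serving the stale value until
# its cache entry is dropped -- a bug that only appears past a single instance.
database = {"plan": "free"}              # stands in for the real DB row

class AppInstance:
    def __init__(self):
        self._cache = {}

    def get_plan(self):
        if "plan" not in self._cache:
            self._cache["plan"] = database["plan"]   # "the database is slow, cache it"
        return self._cache["plan"]

    def upgrade_plan(self):
        database["plan"] = "pro"
        self._cache["plan"] = "pro"                  # only *this* instance knows

a, b = AppInstance(), AppInstance()
print(a.get_plan(), b.get_plan())        # free free
a.upgrade_plan()
print(a.get_plan(), b.get_plan())        # pro free  <- instance B is now serving stale data
```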


The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing a few solutions getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely becoming problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base, changesets and all, the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out the upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.

It won't be perfect, but combine that with a good tiered rollout, and increasing the velocity of rollouts becomes entirely possible.

It's kinda funny to me -- a lot of the agentic hype seems to hugely reward good practices: cooperation, documentation, unit testing, integration testing, local test setups.


Any `return c` for some constant c is a valid and correct hash function. It just has a lot of collisions and degrades hash maps to terrible performance. That was in fact my first thought when I read "simplest hash functions".
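
A quick illustration in Python, with made-up names: a constant __hash__ still satisfies the hashing contract, it just forces every key into the same bucket:

```python
# Hedged sketch: equal objects hash equally (the contract holds), but every key
# collides, so dict operations degrade from O(1) towards linear probing chains.
class ConstantHashKey:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, ConstantHashKey) and self.value == other.value

    def __hash__(self):
        return 42   # any constant works -- correct, just catastrophically collision-heavy

d = {ConstantHashKey(i): i for i in range(2_000)}   # builds, but already noticeably slow
print(d[ConstantHashKey(1234)])                     # still correct: prints 1234
```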

