
> I am an engineer. I hire other engineers. I run a company that ships usable software for small businesses.

> We do this every day. I'm sorry to say, we are indeed shipping in days what used to take weeks.

I've been searching for months for evidence of this kinda thing. Do you have receipts you can share? Or is it more of the same "just trust me bro"?


I should put together a blog post to share more, but unfortunately it is more "trust me bro" at this stage. You can see a few other comments where I replied: we do have subjective evidence suggesting to me that we're moving much faster than we could have in the past.

Of course, it's not just shipping, it's shipping stably in a way that doesn't disrupt the day-to-day operations of the businesses we're working for. One client that comes to mind has 2-3 niche SaaS applications that they used independently for various workloads. We completely replaced 2 of those in about 6 months without any disruption to their business (no, we did not replace them feature-for-feature; we just built what they needed).


> I feel its a very telling skill of engineers whether or not they can communicate issues in an effective manner urgently and figure out the best course of action to unblock.

I could be the person that appears not to communicate, but that's because I've never had a manager who could unblock me faster than if I just did it myself without telling anyone. For the longest time, every manager I've ever had was mostly useless for unblocking some issue; it took quite a few years before I got an EM who actually makes shit happen. Only then did it become a habit I had to break.

It doesn't make sense to tell someone who can't or won't help you that you're blocked on something. Eventually you just default to never asking.


If you can do it yourself, you aren't blocked?

> Google's going through a lot of effort to release a model that will give everyone a very poor first impression of what on-device models are capable of, souring it for everyone for a long time afterwards.

I wonder what that will do for the competition between hosted genai and local models...


> Framing it around consent is unnecessarily inflammatory and makes it harder to have a discussion, not easier.

For me the most significant problem is the lack of consent. I assume it's just not how you want to frame the problem. Ignoring the problematic parts of some behavior is a common problem in modern software, and it's exactly what the article is complaining about.


I don't think this is a question of framing or ignoring problematic behavior at all. I'm quite certain that you wouldn't find it anywhere near comparably egregious if Google added a new developer option without your consent; the most significant problem is the 4GB and the LLM. And, of course, you did consent to their software terms. You are free to switch browsers. What does consent have to do with this?

Yes, no one would have a problem with it if it were useful. So what, they're hypocrites if they don't like it because it's useless? Actually, people generally only complain about consent when they didn't like what happened. The takeaway is that if an update will be thrust upon a user, deliver value for them. And it's your problem, not the user's, to persuade them that what you're thrusting upon them has value.

Use a different browser. Firefox works great. You're trying to negotiate with terrorists.

Exactly this. My issue with Microslop isn't that they're using AI; that's its own can of worms.

It's the fact that they were forcing it onto MY computer, using MY bandwidth for THEIR profit goals. The lack of consent was the final nail in the coffin for me; no computer in my house runs Windows now, and it will at best be a long time before that changes.

I got rid of Chrome ages ago as well. Chrome's only redeeming feature is its user base. It's slower, uses more system resources, is ugly as a browser, and now it's an AI rapist too.


I'm not sure the uptime or UX of Codeberg is meaningfully better?

If the model you're using tries to claim that something false is actually true, then yes, your model is wrong.

No, because it's still possible to find the data using standard techniques, it doesn't count as obscurity.

I.e., just because you* don't know where something is doesn't mean it's using obscurity to hide.

The reason is important, because words mean things: if you say that knowledge of some secret is security through obscurity, that means passwords are security through obscurity.

*: that may or may not be available to the attacker.

In other words, just because a secret exists doesn't put that secret into the 'obscurity' category.


> Security through obscurity, as an additional layer, is good!

If and only if the security advantage it gives outweighs the numerous disadvantages.

It never* does. That's why the comment calling it bad got so many upvotes. Mixed in with those cargo-culting the meme are people like me who have had to deal with obfuscation techniques, written by someone else, that the bad guys understood before I did.

Obfuscation as a security measure is bad because it feels like its positives can compete with its negatives, but that's rarely the case.

*: effectively never


> It never* does.

Can you substantiate this claim a bit?


> Can you substantiate this claim a bit?

I might, depending on how much objective evidence you'd demand for substance. The problem with obscurity as a security feature is which group feels the impact the most. Heuristically, you can split those affected into three groups: APTs, skiddies, and devs/maintainers.

Obscurity will stop skiddies... effectively 100% of the time... but so will everything else. Regular updates and a halfway-decent password also stop skiddies effectively 100% of the time. Obscurity has a net negative effect on the system because it doesn't increase the attack cost for skiddies, but it does increase the maintenance cost to engineering. And skiddies are the vast majority of the attackers you'll deal with.

Your Advanced Persistent Threats will be slowed by some amount of obscurity, but they will not be stopped by it at all; at best it's a road bump. Persistence and intent will overcome any amount of obscurity. Same idea as "given enough eyes, all bugs are shallow": given enough time, any amount of obscurity will be understood by the baddies.

The downside to obscurity is its cost, both to the maintenance of the system as a whole and to the heuristics non-security developers use to find and understand bugs. Obscurity introduces significant complexity into the system. That complexity makes it harder to find and understand real bugs, but worse, it also obscures the security of the system itself to non-experts. I believe we'll never escape the reality that most developers see obscurity and think "this 'secret' makes this secure", but obscurity is *not* a secret. It doesn't provide additional security/safety; it only marginally increases the time to find and deploy an exploit. Worse, this extra layer of obscurity can actually decrease security, because developers stop looking for ways to harden the system once the "secret" feels good enough. I.e., the added obscurity carries an opportunity cost: it replaces the effort that would have been spent on real security.

If you have a full security team managing the obscurity, the maintenance cost to the system will be less than the cost it imposes on your APTs. But if you don't have a full-time team working on it (or at least a strong expert in how obscurity benefits the system and, more importantly, in real security), the maintenance costs will exceed the time/delay it buys until exploitation.

It's likely that I could deploy some system securely with obscurity layered on top. But I often don't, because I don't wanna deal with it. It makes debugging harder. Depending on the specific flavor, it might also give the baddies a place to hide, because the signal of attacks no longer looks like the signal everyone expects.

If you can avoid *ALL* of the pitfalls, it will, without a doubt, increase the time to exploit. The problem is that if you're unable to avoid all of them, the negative aspects will impose a much heavier cost on the maintenance of the system, and on the devs, than any cost they impose on an attack.


These things are always a cost/benefit calculation. Obscurity increases the cost, and it works well enough at that. It doesn't increase the cost by a lot, so it's not a super effective countermeasure, but it usually has a positive ROI because it's cheap to add.

Your complexity argument does make sense, but that also factors into the ROI calculation. I'd say obscurity is beneficial much more often than ~never.


> These things are always a cost/benefit calculation.

Yes, without a doubt! Just like all security measures. The difference is that with real security measures, the security gain grows faster than the negative side effects. With obscurity as a security measure, the gain is marginal at best, and even when done "perfectly" the downsides are significant.

> but it usually has a positive ROI because it's cheap to add.

My experience has always been the inverse: it is cheap to add, but more often than not it does nothing meaningful for the security of the system.

> I'd say obscurity is beneficial much more often than ~never.

Do you work in security? I used to think so too, back before I had to teach security fundamentals to the average software engineer.

To be clear, none of the above should be read as a refutation. I don't disagree with your opinion, per se. But in my experience, I've been frustrated many times when some non-security expert tries to add some kind of obscurity, and I can't remember a single time I've thought "thank god we made this thing much more complicated". Sample bias, to be sure... but if it were actually useful, I'd assume I would have encountered at least one time when I was glad for the added obscurity.

It's true that it's possible to increase the security of a system by adding some layers of obscurity. But I've never seen it be worth the cost, and the same could be said of turning the system off... so when doing the cost/benefit calculation, it's important to account for the fact that, going by history, obscurity has never added meaningful security, and working around it is almost always annoying.


I don't work in security now, but I have. I'm talking about things like changing the default SSH port from 22, not just making things complicated for the sake of it. I think this hinges on each person's past experiences, rather than the argument itself.
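To make that concrete, here's a minimal sketch of the kind of change I mean (assuming a stock OpenSSH setup; 2222 is just an arbitrary non-default port). The whole measure is one line in the server config:

    # /etc/ssh/sshd_config -- listen on a non-default port instead of 22
    Port 2222

Restart sshd and the drive-by scanner noise in the auth logs mostly disappears; anyone willing to run a full port scan will still find it, which is exactly the skiddies-vs-APTs split described above.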

> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.

Uh... no? If you mean legally, that might be true for some people, depending on jurisdiction. But ethically? Yes, researchers are ethically obligated to disclose responsibly.

> Just fyi.

...

> Be glad it was disclosed at all. Be glad a patch was available prior to release.

I am glad that a patch was available. Equally, I can be glad that the Linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.

Likewise, when people in my industry behave poorly or unethically, I'm the person ethically obligated to both point it out and condemn it; not to become an apologist insisting I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.


> but hardly illegal or forbidden any more than any other service restriction

Intentional (or negligent) anti-competitive behavior is illegal in the US.

> Don't like it, cancel your plan.

Don't like being abused by a company? Just pretend it's not happening! And anyone who isn't exactly as smart as you? They deserve to be cheated out of their money too!

