
My favorite is when the website accepts your long random password but then the login fails, because the set-password function truncated the password before hashing it but the login function doesn't.
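A minimal sketch of how that mismatch plays out (hypothetical helper names; SHA-256 stands in for a real password hash, and 12 is an assumed truncation limit):

```python
import hashlib

LIMIT = 12  # assumed server-side truncation limit

def set_password(password: str) -> str:
    # The set-password path silently truncates before hashing.
    return hashlib.sha256(password[:LIMIT].encode("utf-8")).hexdigest()

def login_hash(password: str) -> str:
    # The login path hashes the full input -- no truncation.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

stored = set_password("correcthorsebatterystaple")        # 25 chars, truncated to 12
assert login_hash("correcthorsebatterystaple") != stored  # full password always fails
assert login_hash("correcthorse") == stored               # only the truncated prefix works
```

So the user's real password never logs them in, while a prefix they never chose does.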


Indian NPS, the equivalent of a personal pension account, government operated: the website forces you to change your password every 90 days. It accepts any length for the new password, silently truncates it, then hashes & stores it. Now you are locked out, because you don't know what truncated password they actually stored.

Happened to me two times: changed password, locked out, had to reset. Then while logging in, I noticed their requirement of 12 characters; at the next reset I used 12 characters and did not get locked out.


Yes, but that would be OK if the login form truncated the password in the same way as the password change form. They'd both arrive at the same hash. Which is what the comment above you said. What you describe probably means the password change form actually did run the hash on the full length, while the login form truncated it.


This is definitely not OK because it reduces the entropy of the passwords without the user knowing this has happened.


I never said it's OK, I just explained what's happening.

Ah, now I understand why I got those downvotes. What I meant by "OK" was that it would "work". It wouldn't exhibit the behavior my parent comment was describing. Not that it would be secure or good practice.


Ah that makes more sense, thanks for clarifying.


No, the login form did not truncate; it showed a JavaScript error toast saying the password is longer than acceptable.

The initial signup also checked & showed that toast. But the 90-day reset page did not.


This still happens, in 2020, with breathtaking regularity. Several times a year, a site won't accept my password, so I then start the whole "Will 20 characters work? No, that didn't work. Will 16? Nope... 15? Nope... 12? Nope. Oops I skipped 14... AH YES FINALLY."


One of the hats I wear these days is looking through source code for security vulnerabilities. It's shocking how many SQL injection vulnerabilities I find in newly written code. I remember first reading about SQL injection back in the 90s, and yet developers are still making that very basic mistake. It is also a bit scary how many of the code analysis tools miss intentional flaws I've added to the code to test the scanner. These aren't ancient lint checkers, they're ridiculously priced enterprise tools. It worries me that they're giving a false sense of security that is going to get lots of implementations burned.


>These aren't ancient lint checkers, they're ridiculously priced enterprise tools. It worries me that they're giving a false sense of security that is going to get lots of implementations burned.

But no one is going to get fired or held back for choosing those 'enterprise' tools. They've spent a lot of money, everyone has a checklist, and people move on, even if there's a breach. And some junior dev might have pointed out "hey, this open source toolset does this, but is kept up to date and has found 47 things our scanner didn't find", and that junior will likely be ignored.


“Well with open source tools, you pay with time and we prefer this gold-plated, Cadillac, enterprise-grade tool. We’re a serious company now, after all.”

It’s maddening.


I'd argue it's bad API design. You can't make an API that's easy to use wrong and insecurely and then be surprised that people use it wrong and insecurely.


I can only half agree with you on that. Yes, I also dislike APIs that make wrong or unsafe use easy and correct use more bothersome but seemingly no different in behaviour (until it goes BOOM), but I also find that soooo many people simply don't have the awareness that they are interfacing with another system that interprets their data in a potentially unsafe way. And these people will misuse any API like this.


Unfortunately short of forcing everyone to use an ORM I don't see how we can block the unsafe API, which I'm assuming to be the string-based query interface e.g. `conn.query("SELECT * FROM users")` since any interface that accepts a string will allow a dynamically constructed string which lets developers open themselves to injection attacks. Only ORMs AFAIK can prevent this, e.g. db().users.all() or db().users.select(name="bob").

Maybe there's a clever trick I'm missing here.
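For what it's worth, parameterized placeholders (which most drivers support) already block injection without a full ORM, though nothing stops a developer from bypassing them with string interpolation. A quick sqlite3 sketch (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("bob",), ("alice",)])

evil = "bob' OR '1'='1"

# Unsafe: string interpolation lets the input rewrite the query itself.
unsafe = conn.execute(f"SELECT * FROM users WHERE name = '{evil}'").fetchall()

# Safe: the placeholder passes the value out-of-band, never as SQL text.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

assert len(unsafe) == 2  # the injected OR clause matched every row
assert len(safe) == 0    # no user is literally named "bob' OR '1'='1"
```

The ORM argument above is really about making the placeholder form the only form that exists.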


It'd be nice if the languages offered a way for the query-compiling function to require that the query strings given to it are static, compile-time strings.
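Python can't enforce this at compile time, but one rough approximation of the idea (all names here are hypothetical) is a registry of queries declared once as literals at import time, so dynamically assembled strings are rejected before they ever reach the database:

```python
# Hypothetical sketch: only queries registered up front as literals are allowed.
_REGISTERED = frozenset({
    "SELECT * FROM users WHERE id = ?",
    "SELECT name FROM users",
})

def safe_query(sql: str, params=()):
    if sql not in _REGISTERED:
        raise ValueError("query string is not a registered static query")
    return sql, params  # a real version would hand this to the driver

safe_query("SELECT name FROM users")  # fine: a registered static string

try:
    user_input = "1 OR 1=1"
    safe_query(f"SELECT * FROM users WHERE id = {user_input}")
except ValueError:
    pass  # dynamically built strings never reach the database
```

Languages with richer type systems can push the same check to compile time (e.g. tagged templates or const-only string types), which is what the comment above is asking for.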


This will never change until engineers can be held criminally accountable for incompetence, particularly when personal information is involved.

Of course any proposal to ever hold a programmer accountable for literally anything is always unpopular on HN, for obvious reasons.


Because it's ridiculous to decide that the bottom of the decision graph should be held responsible for the poor decisions made from above.

Engineers should only be held accountable for decisions they made personally. The unfortunate reality is that, a terrifying amount of the time, terrible decisions are handed to engineering teams as required implementation details from managers, executives, product directors, etc. So should engineers be held criminally accountable for their product manager demanding MD5 hashes on passwords?


That sound you hear is millions of actual engineers, ones with qualifications/certifications/professional obligations/legal liability (think “civil”, “mechanical”, “electrical”, et al.) all rolling their eyes yet again at the code monkeys who call themselves “Engineers”, yet aren’t prepared to tell their boss that the decisions they’re trying to make, with inadequate training and experience, are dangerous...

Do you think the guy signing off on a bridge or power station design gets pushed around by “product managers” demanding shitty or cheapskate design or construction?


You’re welcome to disagree about calling them software engineers, but I’m not seeing how that is relevant or constructive to the whole point of this post and comment chain?


The point GP is making that civil engineers will not (knowingly) design an unsafe product just because they got something in a specification that led them to do so. They’ll object and refuse.

You were arguing that the specs or decisions made above their pay grade forced the software engineers into a dangerously faulty design which they dutifully created and shipped.


It's a bit circular, though, isn't it? It is in part due to the fact that a mechanical engineer _has_ that legal certification that they have the standing to meaningfully object, and actually be listened to by their PHB.


An engineer should be willing to tell his boss "No, because I would go to jail." If the manager insists anyway, the engineer should be obliged to refuse, even if that costs them their job.

Of course management should also be held accountable, but until engineers are forced to have some skin in the game they will continue to be as pliable as wet noodles.

Consider this: who better to blow the whistle on management than an engineer who knows they've been given an illegal order?


I have never seen an engineer who understood the issues be 'as pliable as wet noodles'.

What you're asking is for an engineer to be stuck in a place of legal culpability if they do the work, and to be fired if they don't. Added bonus: you mention whistle blowing, but who the hell would they whistle blow to?

If there's a proper industry or governmental group to blow the whistle to, that will also see the engineer financially compensated until such a time as they find a new job, then fine, it's fair to make engineers culpable. Otherwise, you're just making engineers suffer for the bad decisions of those made above them, by making them either legally liable, or risking their jobs.


Any employee who refuses an illegal order is risking their job. If your boss asked you to perform brain surgery and promised to fire you if you refused, would you give it your best try or would you tell him to fuck off? If software ''engineers'' want respect they need to grow a spine and learn how to say that simple two letter word.


Insecure websites aren't illegal. Only -breaches- are. And that's only a civil offense, leading to a fine, which the company pays. Right now, both the punishment and the decision are aligned.

You're looking to now make it so that there's a punishment levied on the developer, who has no more power to say no than they currently do. You want them to say no, but all you're doing is making it more unpleasant for them to not do so; you've done nothing to make it easier to do so.


> An engineer should be willing to tell his boss "No, because I would go to jail"

I guess we just fundamentally don't agree on how power dynamics affect these types of scenarios.


I agree with the GP, but I also think it is poorly phrased. It's not that the engineers should be willing to say they'd go to jail, it's that they should be able to.

Right now, the conversation goes, management: "I want x". Engineer: "X is insecure, we should do y instead". M: "Y will cost us X more than x, and it's never going to matter for us, anyway."

This puts the engineer in a position where they need to argue and justify the cost. Compared to "Sorry, I can't do that; it's illegal and I'd go to jail if I did that and was found out." Now the engineer doesn't have to justify anything. The law isn't a burden on the engineer here, it's a shield.

Yes, there are still some scenarios in which management insists. In my limited experience, that's in gray scenarios where it's arguable whether the law applies. But the point is, it's much easier for an engineer to argue whether the law applies than whether it is worth the money.

I think you can see these effects in the lengths companies go to protect healthcare data (HIPAA) vs any other random personal data.


So this is a reasonable argument, but before we get there, there should be real responsibility placed on the company. Currently, when user data is lost, the companies suffer absolutely no consequences. They have literally no incentive to make sure their system is secure.

The equation should be, for management: "Y will cost us X more than x, but if X is hacked we will get taken to the cleaners"

For the actual engineer to be held responsible you would have to add a formalised approval process, so that it's clear who signed off on what. Imagine you signed off on something, and then changes were made without your knowledge: that's much easier to do with software than with a bridge.


I guess my point is not that engineers should be exempt from being held accountable for their work, but that engineers are frequently asked to do things incorrectly/poorly/negligently and then assessed by their employers for their willingness to comply. Sure, you can say "Just stand up to your employer", but that's an incredibly dismissive stance on a complicated issue. Yes, you should say no to requirements that are flat out illegal, but is it ever that cut and dry? I'd be surprised to find it was ever that simple.


According to my engineering professional association, it is that simple. They'd have no qualms stripping me of my license to practice engineering for knowingly approving an unsafe design, regardless of what effect that decision would have had on my financial well-being.

The big difference is that companies need professional engineers. Professional approval on certain things is required by law. I'm not sure that would be a good idea for software, but that is what makes the system work for professional engineers.


> They'd have no qualms stripping me of my license to practice engineering for knowingly approving an unsafe design

Presumably there would be repercussions at the government level if a company repeatedly demanded engineers do things worthy of stripping their licenses though, no?


This varies by jurisdiction, but perhaps I oversimplified. There is another reason that doesn't happen.

The individual engineer doing the work needs a license, but the company itself also needs a permit to practice. The permit must be held by an engineer, who is personally responsible for the engineering that occurs under their permit.

So, the permit holder needs to worry not just about their own ethical behaviour, but that of all engineers in the company. They are incentivized to ensure the company will hold the public safety paramount, or to walk away if they cannot (thereby leaving the company without a permit).

If the company has a pattern of misbehaviour, it may be difficult to obtain a permit.


_Exactly_. Tech companies regularly exist purely based off of illegal business models (they call them _Disruptions_).

So, yes, we agree, if there are repercussions for a company regularly breaking the law, then engineers can and should refuse work that has negative legal or moral repercussions. But in the world of tech, that's not the case.


Wait a minute, I thought software engineers were in high demand, with companies fighting over them and opportunities everywhere! At least that's what one out of every 10 HN articles tells me. Surely that demand gives them a little power and agency over their work. I think we are pretending here that these developers have only one option: "Sure, boss! Whatever you say, boss!"

I personally believe that you, the software developer typing in the code, should hold yourself personally accountable for what you are typing in. You might also be designing what you type in, or even setting the requirements, but it might be other people. Regardless, you are making the software come into being--you're the one coding it and pushing it to the repo, so you should set the standard of what is acceptable. This "well, boss told me to do it!" rationalization and blame-shifting is how we get dangerous and unethical software.

And, yes, I have quit software jobs where I was asked to write software I considered ethically questionable, and failed to change the boss's mind.


> I think we are pretending here that these developers have only one option: "Sure, boss! Whatever you say, boss!"

No, I am suggesting that there is significantly more grey area between your moral highground and reality.

> I personally believe that you, the software developer typing in the code, should hold yourself personally accountable for what you are typing in

Yep.

> This "well, boss told me to do it!" rationalization and blame-shifting is how we get dangerous and unethical software.

It really is a strange world that, when corporations are attempting to turn profit on illegal behavior, it's the meaningless bodies-in-seats that we're trying to hold accountable.

I am, and have been, repeatedly, suggesting that the lowest level cannot be held accountable without holding the rest of the levels accountable. Jailing engineers for doing things their companies demanded of them is ridiculous if you're not also jailing those doing the demanding. I'm kind of shocked this isn't painfully obvious.

> And, yes, I have quit software jobs where I was asked to write software I considered ethically questionable, and failed to change the boss's mind.

Congratulations, that's a level of privilege lots can't afford.


The engineer should rather reply that the boss would go to jail.


Better if both would face repercussions, since if only the boss has skin in the game passive introverted engineers will silently follow orders, knowing that they're not personally risking much if they comply but risk losing their jobs if they don't.


Of course a certified accountable login screen engineer with a personal defense lawyer and a group of analysts, who make you a form:

  Please login
  Username: __________
  Password: __________
for $5.7M a year is always unpopular in business circles, for obvious reasons.


The mandatory GDPR training session at my current place in Berlin said that we could be personally liable with regard to personal information.

(Not that I can read German to the standard required for understanding laws and looking it up for myself; I can just about manage tabloid newspapers…)


My understanding is bcrypt truncates at 72 bytes by default, which could be 18 emojis I guess?

It should still be consistent with registration/login with things getting truncated, but I think this is also the default in PHP if you are using password_hash() today. Is that a security issue?
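The emoji arithmetic checks out: bcrypt's limit is on bytes, not characters, and most emoji take 4 bytes in UTF-8, so 72 bytes is indeed 18 of them (plain-Python sketch, no bcrypt needed):

```python
password = "😀" * 20            # 20 emoji, looks short on screen
raw = password.encode("utf-8")

assert len("😀".encode("utf-8")) == 4  # one emoji = 4 UTF-8 bytes
assert len(raw) == 80                  # but 80 bytes on the wire
truncated = raw[:72]                   # what bcrypt effectively sees
assert len(truncated) // 4 == 18       # only the first 18 emoji count
```

So two passwords that differ only after the 18th emoji would hash identically.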


> Although implementing a maximum password length does reduce the possible keyspace for passwords, a limit of 64 characters still leaves a key space of at least 2^420, which is completely infeasible for an attacker to break. As such, it does not represent a meaningful reduction in security.

https://cheatsheetseries.owasp.org/cheatsheets/Password_Stor...


> My understanding is bcrypt truncates at 72 bytes by default

I don’t think it’s “a default”, it’s a fundamental limitation of the algorithm.

Specific bcrypt libraries could implement length-reduction by default though.


"Don't roll your own crypto" should also cover password management.


Oh this... I had something weirdly different recently. On a DigitalOcean account, all of a sudden I couldn't log in. Apparently the login page was updated and said my email address was invalid, which is weird because I had definitely logged in before. I used the + trick with Gmail, so I thought that was the issue, in combination with the new login page. Wasn't the case. Apparently my DNS service, which also does filtering, blocked part of their CDN, including part of the email validation JavaScript. Turned that off, could log in again.


PayPal has this problem but only in certain change-password forms. I think it was a password reset form. This happened to me recently. It was incredibly frustrating.


So that's what it was... god damn it


I can't believe sites think it's acceptable to do that. I get it, algorithms like bcrypt have a size limit. But there are reasonably secure ways to get around that, for instance using HMAC-SHA256 to pre-hash the password before bcrypting it.

Or better yet, pick one of the other algos with better limits and protection like Argon2 or Scrypt.


I had this happen recently with a finance website! Although technically the reverse, it silently stopped logging me in because the password field was changed to use HTML validation to enforce a max length of N, but they had previously accepted my password that was length N+1. Maddening.


I think FF recently deployed a change so that it no longer silently truncates inputs that are too long. It won’t solve the problem in all cases but hopefully in some.


The reason this is a problem even for new applications [1] is that it's the result of a leaky abstraction in common password hashing libraries which forces the site operator into a tradeoff between login service reliability/consistency and security.

Iterated password hashing algorithms like bcrypt [2] use a parameter tuneable by the site operator that drastically varies the computational cost of calculating the hash, so a brute force attack on the hash could be required to use orders of magnitude more computational power to crack. The tradeoff is that it will make users wait longer to log in, and (might) incur additional operational cost to the site. This is why it's a tuneable parameter: turning it up increases security at the cost of user delay and operational cost.

In theory. In practice, the user delay and operational cost are also proportional to the user's password length, which you don't control. Observe, the leak. This can cause wildly varying response times for authentication attempts, and opens your authentication system up to a DoS attack. The only reasonable solution is to put a small cap on password length so that the longest passwords don't take more than 2-5 times as long to compute, as per each site's tolerance of variance.

So why does the runtime of the iterated hash function depend on the length of the password? In each iteration the user's password is joined to the previous iteration's hash, and hashed together to get this iteration's hash. The runtime of any hash function is proportional to the length of the data fed in, the data fed in includes the user's password, thus the runtime of an iteration is proportional to the length of the user's password. So the runtime of the whole iterated password hash is `O(Iterations * PasswordLength)`.

There are various schemes one could come up with to change the time to `O(Iterations + PasswordLength)` ~ `O(Iterations)`, such as concatenating the first hash of the user's password instead of the password itself on all successive iterations, so all iterations but the first are independent of the user-chosen password length. There could be some security/entropy-based arguments for avoiding this solution, though I don't know what they would be.
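The scheme described in the last paragraph is easy to see with a toy iterated hash (illustration only, not a real KDF): hash the password once up front, then iterate over fixed-size digests, so per-iteration cost no longer depends on password length.

```python
import hashlib

def naive_kdf(password: bytes, iterations: int) -> bytes:
    # O(Iterations * PasswordLength): the password is re-fed every round
    h = hashlib.sha256(password).digest()
    for _ in range(iterations):
        h = hashlib.sha256(h + password).digest()
    return h

def fixed_cost_kdf(password: bytes, iterations: int) -> bytes:
    # O(Iterations + PasswordLength): hash the password once, then every
    # round only touches 64 bytes of fixed-size digests
    p = hashlib.sha256(password).digest()
    h = p
    for _ in range(iterations):
        h = hashlib.sha256(h + p).digest()
    return h

# the second form's per-round work is the same for a 5-byte or 5 KB password
assert len(fixed_cost_kdf(b"x" * 5000, 100)) == 32
```

With the second form, response time varies only with the iteration count the operator chose, which removes both the variance and the DoS angle described above.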

[1]: https://hackernews.hn/item?id=22749706


Um...you're missing the point.

It's fine to truncate the password (though ideally you don't do it at a super short length).

The issue you are replying to is referring to when a site doesn't do it for both account creation and login. Which is a bug. A stupid, hard to detect, hard to explain, likely never to be fixed, bug. That only affects people trying to be secure.


I get that different parts of a service truncating the password to different lengths is a problem, and that it's a different (perhaps worse) problem than the one that causes site operators to limit password lengths.



