
There is something I don't get there, though. Wouldn't you be able, with a debugger for instance, to see the program once it has deobfuscated itself for the processor to understand it?


That presumes that being able to "see the program" reveals anything about the cipher or its key. In the white-box crypto model, it doesn't; the source code is itself "encrypted", so that its basic operations are visible but the precise sequence of operations it will take given a specific input is Hard to determine.


An interesting possibility in the white-box model is the ability to take something invertible (e.g. a block cipher) and instantiate it in a way that cannot be inverted. That would allow building a fixed-key signature mechanism with signature size essentially equal to the security level.

Another question is whether usable (i.e. reasonable code and data sizes) and secure (with "hard" in the cryptographic sense, not merely in the sense of non-obvious) white-box cryptography is actually possible.


But this can't work without a black-box TPM chip that is no longer under your control... So for this to work you have to give up general-purpose computing?


No, it does not depend on a TPM.


That's a very conventional method of obfuscation. What the paper proposes is that the program would do a series of random actions which would add up to the original program + a lot of noise.


I don't claim to understand the details at all, but I would at least imagine the obfuscated program to be so complex and with so much indirection and "virtual-machine"-ness that any kind of single stepping would yield an absolutely incomprehensible state at every point until termination.


I'm pretty sure that this wouldn't be safe. By simply disassembling the software you would see the strings stored in the application itself, so you could easily find any password stored that way.


That would mean you would have access to the source code. Or do you mean the software is stored in memory as well?


In a compiled language, it would be part of the binary. If you opened up "checkpassword.exe" in a hex editor, you'd see the password clear as day somewhere in the static data section.

In an interpreted language, the interpreter would create some kind of string object containing the contents of the password in memory, and getPassword() would return that object when called.


Even simpler, Unix has a tool called "strings". Run it on any piece of data and it will print all readable strings contained in that data. It works on binaries as well.
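For illustration, here's a rough Python sketch of what strings(1) does — scan for runs of printable ASCII. The minimum run length and the sample blob are made up for the demo:

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII, roughly what Unix strings(1) prints."""
    return [m.decode("ascii")
            for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# A fake "binary" with secrets embedded among non-printable bytes.
blob = b"\x7fELF\x02\x01\x00\x00hunter2\x00\x03\x04s3cret_api_key\xff"
print(extract_strings(blob))  # -> ['hunter2', 's3cret_api_key']
```

Note that "ELF" is skipped only because it's shorter than the minimum run length; any embedded password long enough to be useful will show up.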


Good to know. I guess the best thing is to NOT store passwords in the code at all? Rather in encrypted databases?


You should never store a password. If absolutely necessary, you could store a derivation of that password from a KDF like scrypt, and then check whether the user's password derives to that same value. This is how login passwords are checked on Unix and Unix-like systems with shadow passwords (which never store the user's password).
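A minimal sketch of that scheme using `hashlib.scrypt` from the Python standard library (the n/r/p parameters here are illustrative demo values, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def derive(password: str, salt: bytes) -> bytes:
    # scrypt cost parameters are demo values; tune them for your hardware.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26)

# Store only (salt, derived key) -- never the password itself.
salt = os.urandom(16)
stored = derive("hunter2", salt)

# At login, re-derive from the submitted password and compare in constant time.
print(hmac.compare_digest(derive("hunter2", salt), stored))  # True
print(hmac.compare_digest(derive("letmein", salt), stored))  # False
```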

And even with all that I probably screwed something up, but that advice is still better than storing plain passwords.


Nobody ever thinks of the case where a program has to supply a password and not just check it. For example, suppose your program (that runs unattended) has to interact with some JSON-RPC API, and the remote server expects a password. That password has to come from somewhere.


You should use public-key crypto for this case when possible. Any key you bake into the binary will be a publicly known password for anyone who gets access to that binary. With public-key crypto it's safe to include the public key and then establish an ad hoc session key using secure key exchange.
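To make the idea concrete, here is a toy Diffie-Hellman exchange in plain Python. The 64-bit prime and the whole construction are for illustration only — a real client would ship the server's public key and use a vetted library (X25519 or similar), never roll its own:

```python
import hashlib
import secrets

P = 2**64 - 59  # largest prime below 2**64 -- demo size, NOT secure
G = 5

def keypair() -> tuple[int, int]:
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

# Server's long-term keypair; only server_pub would be baked into the client.
server_priv, server_pub = keypair()

# Client generates an ephemeral key and derives a session key from server_pub.
eph_priv, eph_pub = keypair()
client_key = hashlib.sha256(str(pow(server_pub, eph_priv, P)).encode()).digest()

# Server derives the same session key from the client's ephemeral public key.
server_key = hashlib.sha256(str(pow(eph_pub, server_priv, P)).encode()).digest()

print(client_key == server_key)  # True: a shared secret, no password shipped
```

The point is that nothing secret lives in the client binary: an attacker who disassembles it learns only the public key.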


You actually could get this from memory while the program is running, but even if it weren't running, you could get it by using a hex editor. If you've got access to the "strings" command, that will also usually work for things like passwords.


Ah OK, so what is the recommendation for storing passwords in a scripting language like Ruby, Python, RingoJS, NodeJS, PHP, etc.? I don't think they have fancy functions to decide whether to store variables in a CPU register or encrypted memory.

I get the feeling that all of this should have been taken care of at the hardware level, but someone just got too lazy during the design process and forgot to patch up the security flaw.

Kinda like how everyone was still using telnet to get into their *nix servers in the 90's.


If you're storing passwords on disk, you're going to use something like bcrypt (http://en.wikipedia.org/wiki/Bcrypt).

The way it generally works is that you keep a user's hashed password, along with a unique per-user salt, stored on disk (usually in a database of some sort). When you need to authenticate a user, your script asks the user for their password. Then you hash this input password with the stored salt. Finally, you compare the hash of the input with the original hash on disk. If they match, you're good. At no point do you store passwords on disk that have not been hashed (via bcrypt or similar). Depending on the security needed (i.e. your threat model and risk), this can get tricky if things like hibernation or virtual machines are involved.
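The flow above can be sketched with PBKDF2 from Python's standard library standing in for bcrypt (same idea: a salted, deliberately slow hash); the dict standing in for the on-disk user table is made up for the demo:

```python
import hashlib
import hmac
import os

users = {}  # stand-in for the database: username -> (salt, hash)

def register(username: str, password: str) -> None:
    salt = os.urandom(16)  # unique per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)  # only salt + hash are ever stored

def authenticate(username: str, password: str) -> bool:
    salt, stored = users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

register("alice", "correct horse battery staple")
print(authenticate("alice", "correct horse battery staple"))  # True
print(authenticate("alice", "tr0ub4dor&3"))                   # False
```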

If you're confused at all about this, I would browse the security forums (http://security.stackexchange.com/) and ask for expert advice.


> Ah OK, so what is the recommendation for storing passwords in a scripting language like Ruby, Python, RingoJS, NodeJS, PHP, etc.? I don't think they have fancy functions to decide whether to store variables in a CPU register or encrypted memory.

You shouldn't need to keep passwords (or any sensitive info) in memory for long. In fact, you shouldn't store passwords at all. Normally, you would put the password in memory temporarily (for hashing or something), and then securely wipe it when you are done.

Failing to wipe memory (or using it improperly) can have awful consequences. IIRC, that was how the Target hack worked: the attackers were able to search through memory and dump credentials.

You should use wrappers for trusted/vetted libraries (like Crypto++ or something). I know for a fact that Crypto++ has a concept of "secure buffers", whose contents are securely erased before destruction. All libraries of that sort should wipe any sensitive information from memory (by overwriting it with garbage or zeroes) before freeing it.
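Python has no real equivalent of those secure buffers (immutable str/bytes objects can't be wiped, and CPython may keep copies around), but the idea can be approximated by keeping secrets in a mutable bytearray and zeroing it when done — a best-effort sketch, not a guarantee:

```python
def wipe(buf: bytearray) -> None:
    """Overwrite the buffer in place; a rough analogue of a secure wipe."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"hunter2")
# ... use the secret (hash it, authenticate with it, etc.) ...
wipe(secret)
print(secret)  # -> bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```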

That particular library probably does not have wrappers for other languages. However, GnuTLS and OpenSSL probably do.


I'm pretty sure it is (I might be wrong though)


I guess you'd also need a larger battery if you were to both render the graphics and handle the synchronization with other people over Wi-Fi. Plus the headset would be lighter, and in case of a collision there would be much less hardware that could be damaged, so I think sending rendered images over Wi-Fi is actually a pretty cool idea.


But the Rift doesn't support that, right? Is it even possible? Wouldn't there be a lot of latency?


And what's a stackoverse?

[edit] Is that even a theory?


The score thing seems to be broken, though. Clicking on "forfeit" doesn't seem to affect it.


Clicking on "forfeit" gets you the answer (it's a shortcut for hitting Esc), but it looks like you can only use Escape to go to the next question.

Pretty cool quiz; I definitely hardly know any of these...


Ah, thanks! The help says to click again to go to the next one, but that doesn't seem to actually work.

Edit: ahh, it does work, but the first time I did it the red bar completely covered the forfeit link. Subsequent times it's been above and fine.

