
Compress then encrypt is not an option because your encryption is broken if it can be compressed at all. Mathematically it's a near certainty that the compression would increase the file size when given an encrypted input.
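
A quick way to see this (a minimal sketch; random bytes stand in for ciphertext from a good cipher, since both should be high-entropy):

    import os
    import zlib

    # Random bytes are a stand-in for well-encrypted data: both are
    # high-entropy, so there is nothing for the compressor to exploit.
    ciphertext_like = os.urandom(1_000_000)

    compressed = zlib.compress(ciphertext_like, level=9)
    print(len(ciphertext_like), len(compressed))
    # The "compressed" output comes out slightly LARGER: zlib adds
    # framing overhead around data it cannot shrink.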

You mistyped "compress then encrypt".

Your argument correctly explains that "encrypt then compress" is not an option, because in that order compression will do nothing except waste time and energy.

On the other hand, "compress then encrypt" is more secure than encryption alone, because even a weak encryption method may be difficult to break when applied only to compressed data: compressed input resembles random numbers, i.e. the statistical properties of the plaintext have already been hidden.

The only disadvantage of "compress then encrypt" arises in the less frequent cases where you are more concerned with defeating traffic analysis of the amount of data sent than with saving resources, and you will pad your useful data with irrelevant junk anyway to hide its real length.

Even then, if the data is highly compressible, it may be advantageous to compress it first and then pad it with junk before encryption.

Thus you might, e.g., compress the data by a factor of 8, then double the length by padding, and still send less data than without compression, while also masking the true length.
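
To make the order concrete, here is a minimal sketch of the compress-pad-encrypt pipeline (the Fernet cipher and the 8-byte length header are my choices for illustration; any authenticated cipher and framing would do):

    import os
    import struct
    import zlib

    from cryptography.fernet import Fernet

    def compress_pad_encrypt(plaintext: bytes, key: bytes, pad_to: int) -> bytes:
        # Compress first, prefix the compressed length, then pad with
        # junk to a fixed size so the ciphertext hides the true length.
        compressed = zlib.compress(plaintext, level=9)
        body = struct.pack(">Q", len(compressed)) + compressed
        if len(body) < pad_to:
            body += os.urandom(pad_to - len(body))
        return Fernet(key).encrypt(body)

    def decrypt_unpad_decompress(token: bytes, key: bytes) -> bytes:
        body = Fernet(key).decrypt(token)
        (length,) = struct.unpack(">Q", body[:8])
        return zlib.decompress(body[8 : 8 + length])

    key = Fernet.generate_key()
    msg = b"highly compressible " * 400   # ~8 KB of repetitive data
    token = compress_pad_encrypt(msg, key, pad_to=2000)
    assert decrypt_unpad_decompress(token, key) == msg

Even padded out to 2000 bytes, the message ships at roughly a quarter of its 8000-byte plaintext size, and every message whose compressed form fits under pad_to looks identical in length.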


If you are saying that good encryption ought to make the result incompressible, then I agree with you.

That strategy may be cathartic, but it will have the opposite of the desired effect. If there's any hope of changing someone's mind, it has to start by respecting their opinion no matter how wrong you think it is. If you start a fight you'll get a fight.


I agree. Trying to punish will just deepen resentment, and they will live in their echo chamber while you live in yours. Then it's just side vs side, with the pundits leading the dialog.

We have to remember that we aren't all working from the same perceptual or moral framework. This is a struggle for me, as I love my parents but our beliefs have diverged considerably.

I think the challenge right now in the U.S. is that for many, it doesn't feel socially safe to question your own side. In reality, we need to feel free to judge actions individually, and judge leaders as a true accumulation of their actions. If we fear rejection from our party/family/friends for not walking in lock-step with the official party stances, that influences a lot of our thinking. No one wants to feel continually guilty about their own views (especially when there are social consequences for changing them), so we often shove aside conflicting details, make jokes, and signal to others that we're still a part of the tribe.

It sucks.


This may eventually work after a long effort, but in the meantime, the person with the fascist thoughts will be deriving social support from you not having cut them off. "Haha, I love fascism, and Bill still likes me and hangs out with me, so I guess I'm good!"


I'm sorry, but some opinions are not worth respecting. People who, e.g., excuse the genocide in Gaza, deny what happened in Tiananmen Square, or insist that the Jan 6 insurrectionists were "just tourists" should not receive a participation trophy.


You're going to look silly in 8000 years!


Why would they look silly?

We don't refer to the year 700 as 0700. It'll be perfectly natural in 8000 years to see "10025" for the year.


But it will be very marginally more challenging to correctly sort string representations of years!
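
A two-line illustration of the nitpick, and the trivial fix:

    years = ["0700", "9999", "10025"]
    print(sorted(years))           # ['0700', '10025', '9999'] -- lexicographic order misplaces 10025
    print(sorted(years, key=int))  # ['0700', '9999', '10025'] -- fine with a numeric key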


Practically speaking, it's impossible to roll a 6 one hundred times in a row with fair dice. Not technically impossible, but we each get to calibrate our skepticism based on how far out the probabilities are.

In this case we can be sure the dice aren't fair because there's significant motivation for them not to be, or at least it's easy to imagine a manufacturing defect in the dice.
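
For calibration, the arithmetic:

    from math import log10

    p = (1 / 6) ** 100
    print(p)         # ~1.5e-78
    print(log10(p))  # ~-77.8, i.e. about 78 orders of magnitude below any plausible coincidence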


This is a 1 in 50 chance we are dismissing as practically impossible though.


You can have this today, and could 15+ years ago, using the excellent gevent library for Python. Python 3 should have just endorsed gevent as the blessed solution instead of adding function coloring and new syntax, but you can blissfully ignore all of that if you use gevent.
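
A minimal sketch of what that looks like (the URLs are placeholders; the point is that monkey-patching makes ordinary blocking code cooperative, with no async/await or new syntax):

    from gevent import monkey
    monkey.patch_all()  # make sockets, sleep, etc. yield to the event loop

    import gevent
    import urllib.request

    def fetch(url):
        # Reads like plain blocking code, but runs concurrently on greenlets.
        with urllib.request.urlopen(url) as resp:
            return url, resp.status

    jobs = [gevent.spawn(fetch, u) for u in ("https://example.com", "https://example.org")]
    gevent.joinall(jobs, timeout=10)
    print([job.value for job in jobs])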


The best kind of documentation is the kind you can trust to be accurate. Type defs wouldn't be nearly as useful if you didn't really trust them. Similarly, doctests are some of the most useful documentation because you can be sure they are accurate.
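
For example, a doctest is documentation that fails loudly when it drifts from the code (slugify here is just a made-up function for illustration):

    def slugify(title: str) -> str:
        """Lowercase a title and join its words with hyphens.

        These examples execute as tests, so they cannot silently rot:

        >>> slugify("Hello World")
        'hello-world'
        >>> slugify("  Trim Me  ")
        'trim-me'
        """
        return "-".join(title.split()).lower()

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # fails if the examples stop matching reality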


The best docs are the ones you can trust are accurate. The second best docs are ones that you can programmatically validate. The worst docs are the ones that can’t be validated without lots of specialized effort.

Python’s type hints are in the second category.
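
E.g. a checker like pyright or mypy can mechanically catch a call that contradicts the hints, which is what puts them in the second category (the function is a made-up example):

    def mean(values: list[float]) -> float:
        return sum(values) / len(values)

    mean([1.0, 2.0, 3.0])  # fine
    mean("not a list")     # flagged by pyright/mypy before the code ever runs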


I’d almost switch the order here! In a world of coding agents that can constantly check for type errors from the language server powering the errors/warnings in your IDE, and reconcile them against prose in docstrings… types you can programmatically validate are incredibly valuable.


Do you have an example of the first?


When I wrote that, I was thinking about typed, compiled languages' documentation generated by the compiler at build time. Assuming that version drift ("D'oh, I was reading the docs for v1.2.3 but running v4.5.6") is user error and not a docs-trustworthiness issue, that'd qualify.

But now that I'm coming back to it, I think that this might be a larger category than I first envisioned, including projects whose build/release processes very reliably include the generation+validation+publication of updated docs. That doesn't imply a specific language or release automation, just a strong track record of doc-accuracy linked to releases.

In other words, if a user can validate/regenerate the docs for a project, that gets it 9/10 points. The remaining point is the squishier "the first party docs are always available and well-validated for accuracy" stuff.


Another example of extremely far towards the "accurate and trustworthy" end of the spectrum: asking a running webservice for the e.g. Swagger/OpenAPI schema that it is currently using to serve requests. If you can trust that those docs are produced (on request or cached at deployment time) by the same backend application instances serving other requests, you'd have pretty high assurance.

Nobody does that, though. Instead they all auto-publish their OpenAPI schemas through rickety-ass, fail-soft build systems to flaky, unmonitored CDNs. Then they get mad at users who tell them when their API docs don't match their running APIs.
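
For what it's worth, some frameworks make that pattern nearly free. A sketch using FastAPI (my choice of example, not the only option):

    from fastapi import FastAPI

    app = FastAPI(title="demo", version="1.0.0")

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> dict:
        return {"item_id": item_id}

    # The running process derives /openapi.json from this exact code at
    # serving time, so the schema cannot drift from deployed behavior.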


Languages with strong static type systems


Is there a mainstream language where you can’t arbitrarily cast a variable to any other type?


https://histre.com does full text search on browser history


The best way is to open a capsule for each batch you receive to test it by taste, then store in the fridge.


> Second: there is no CEO in tech taking a smaller salary than their employees.

That's not just false but very often false.


It's the exceptional codebase that's nice to work with when it gets large and has many contributors. Most won't succeed no matter the language. Language is a factor, but I believe a more important factor is caring a lot.

I've been working on a Python codebase for 15 years straight, and it's nearing 1 million lines of code. Each year with it is better than the last, to the extent that it's painful to write code in a fresh project without all the libraries and dev tools.

Your experience with Python is valid and I've heard it echoed enough times, and I'd believe it of any language, but my experience encourages me to recommend it. The advice I'd give is to care a lot, review code, and keep investing in improvements and dev tools. Git pre-commit hooks (run just on changed modules) with ruff, pylint, pyright, isort, and unit test execution help a lot for keeping quality up and saving time in code review.
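
A minimal sketch of such a hook (the tool list is just an example; drop it in .git/hooks/pre-commit and mark it executable):

    #!/usr/bin/env python3
    # Run fast checks only on the Python files staged for this commit.
    import subprocess
    import sys

    def staged_python_files() -> list[str]:
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]

    def main() -> int:
        files = staged_python_files()
        if not files:
            return 0
        for cmd in (["ruff", "check"], ["isort", "--check-only"], ["pyright"]):
            code = subprocess.run(cmd + files).returncode
            if code != 0:
                return code  # block the commit on the first failing check
        return 0

    if __name__ == "__main__":
        sys.exit(main())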

