I have spent many happy hours listening to Aaron talk about his experiences serving on submarines on YouTube and Twitch. He used to go by JiveTurkey then rebranded to SubBrief.
Side note, if anyone has watched The Hunt For Red October and thought "why can't I do this in a video game?" definitely pick up Cold Waters. The core gameplay loop isn't for everyone (depending on difficulty and era it can involve a lot of slow-paced stealthing around) but for me it's a ton of fun.
That's funny; I personally prefer HTTP for its simplicity, human-readability, accessibility, lack of centralized control, backwards compatibility, and freedom from forced upgrades or locking out old clients, not to mention speed.
Of course, I'm fortunate enough to live in a place where MITM attacks are virtually non-existent, aside from WiFi portals and maybe ISP banners (which I've never experienced.)
> Of course, I'm fortunate enough to live in a place where MITM attacks are virtually non-existent, aside from WiFi portals and maybe ISP banners (which I've never experienced.)
I don’t know where you live, but I feel like this is more common and insidious than you think. For instance, in the UK, Vodafone (or Three, I don’t remember exactly) would break hundreds of our sites by injecting JS and tracking pixels into the markup.
Now, with behavioural targeting slowly dying, ad tech businesses talking about fingerprinting as a valid alternative, and contextual targeting on the rise, I can guarantee you that the situation is going to get worse.
How insidious would you say that is compared to not being able to use an otherwise capable 7-year-old device to access most "secure" websites across the Web?
I only browse HTTPS sites. I have the `HTTPS Everywhere` addon installed with `EASE` (Encrypt All Sites Eligible) turned on so I don't accidentally browse an unencrypted website. Something like 85-90% of the web is encrypted now, and there's no excuse to be using outdated plaintext HTTP anymore. It's a privacy and security risk. There have only been a few instances where I had to view an HTTP site (I'm a freelancer and a client's webpage was still unencrypted, so I had to see it; a rare exception to the rule).
The privacy and security risk comes in large part from the nature of code and actions performed on the site.
In reality, as far as privacy goes, matters are on average the opposite of your claim. Most sites that will put your privacy at risk today are using https - I am talking about the vast majority of the commercially operated web today. I know my privacy is much better respected on a plain-text (no javascript) site using http than on [insert a top 10k most popular site here] using https.
And for security: if I am not, for example, shopping or entering my billing details anywhere on the site, I do not see how an http site can compromise my security.
I actually prefer deploying http sites for simple test projects where speed is imperative, because they are also faster: there is no SSL handshake needed to connect.
It's funny because I got like 70% HTTP in my index, so the whole "90% of the web is encrypted" figure seems to depend on which sample you are looking at. Google doesn't index HTTP at all, so that's not a good place to go looking for what's most popular. That's in fact half the reason why I built this search engine in the first place: they demand things of websites that some websites simply can't or won't comply with.
A lot of servers still use HTTP, for various reasons. There are also some clients that can't use HTTPS.
I think there are absolute numbers, and then there are "the sites most people visit regularly", and those probably are 75% https. It's relative, like most things.
Absolute numbers are pretty hard to define, as is the size of the Internet.
If the same server has two domains associated with it, does it count twice? Now consider a loadbalancer that points to virtual servers on the same machine. How about subdomains?
It may be a privacy risk, but it's certainly not a security risk for plain old blogs and static sites whose data is completely open to anyone who surfs to them.
HTTPS is still a privacy risk because the hostname is sent in plaintext. Perhaps you get some "URL privacy", but you get no improvement in terms of "hostname privacy". HTTP only leaks the hostname once; HTTPS leaks it twice.
This can be prevented by (a) using TLS1.3 to ensure the server certificate that is sent is encrypted and (b) taking steps to plug the SNI leak; popular browsers send SNI to every website, even when it is not required.
You should be getting fewer .txt results in the new update. Part of the problem was that keyword extraction for plain text wasn't working as intended, so those files would usually crop up as false positives toward the end of a search page. I'm hoping that will work better once the upgrade is finished.
This is a tool I found recently. The 0.2.0 release adds support for multi-region queries, which I've been really excited about. Now I can do a single query without having to do merges in post-processing.
SQL joins across different APIs has been really handy too. I can do comparisons of my AWS, Azure and GCP compute resources in a single query.
I too don't need any of the features that cryptocurrency brings. However, cryptocurrencies are making substantial headway on solving the hardest class of problems that humans face: coordination problems. Common examples include the Tragedy of the Commons and collusion [1].
Vitalik has written some surprisingly approachable pieces on the various problems that cryptocurrencies face. It's not too much of a leap to see how these problems appear in non-cryptocurrency contexts. The value I see in cryptocurrencies is that they must develop robust/antifragile solutions to the coordination problems that have plagued society forever. Even if some problems can't be solved, then there are mental tools for making sense of what's going on.
Your assertion about generalist ~= founder is pretty spot on. I've spent decades working lots of different job titles that interest me (or as the needs of the organization dictated). Putting all that down on my resume hasn't helped me much.
I was heading in the direction of very focused resumes. I've also landed a low-pressure founder role, so that's nice.
Mike Monteiro has an excellent talk[1] on contract negotiation and where to get lawyers involved. The pertinent point for me was the following:
Contract negotiation makes contracts fair. If you just sign whatever people put in front of you, you give up your right to a fair contract.
Also:
The contract given to you probably represents the opposing lawyer's wish list of all the things they wish they could have. Most, or at least some, of these things will be extremely disadvantageous to you. Just signing whatever they give you means you submit to their wish list, not your own.
SQL injection attacks are an excellent example where code and data are mixed. One solution is to do a lot of clever escaping of 'attackable' characters that instruct the DBMS to stop treating a character string as data and start executing things [1]. Escaping attackable characters attempts to partition data from code. This usually works but not perfectly.
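The cleaner alternative to escaping is to keep data out of the SQL text entirely via bind parameters. A minimal sketch using Python's sqlite3 (the table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"

# Naive string building splices the attacker's input into the SQL "code":
unsafe = "SELECT secret FROM users WHERE name = '%s'" % attack
print(conn.execute(unsafe).fetchall())  # the injected OR clause matches every row

# A bind parameter keeps the value out of the SQL text entirely:
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)
).fetchall()
print(safe)  # [] -- the whole string is treated as data, matching no name
```

The parameterized version never has to decide which characters are "attackable", which is exactly why it partitions data from code more reliably than escaping.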
Or, run your data through stored procedures instead. It took me a while to figure out why stored procedures were so much more secure than regular queries. I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.
Hmm. I'm going to have to disagree about Stored Procedures providing security. You can do all sorts of bad things using stored procedures that may result in unintended code execution!
I think they're more useful for organization and abstraction than security. Then again, a well organized and smartly abstracted system can lead to better security!
But I think bind parameters are probably a better example of security.
Binding effectively separates the data from the logic. So you define two separate types of things, and then safely join those things together by binding them. It doesn't matter too much whether that happens in the application making a call to the database or in the database in a stored procedure. Obviously this same concept can be applied at many different points along the application stack. The analogous concept in the UI is templating. You define a template and then safely inject data into that template.
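The templating analogy can be sketched the same way; here HTML escaping stands in for the "safe join" of template (code) and user data, with a made-up helper using only Python's stdlib:

```python
from html import escape

TEMPLATE = "<p>{body}</p>"  # the "code" half, defined up front

def render_comment(user_text):
    # The data half is escaped before being joined into the template,
    # mirroring how bind parameters keep SQL text and values separate.
    return TEMPLATE.format(body=escape(user_text))

print(render_comment("<script>alert(1)</script>"))
# the payload renders inert as &lt;script&gt;alert(1)&lt;/script&gt;
```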
> I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.
This isn't well defined. Take this pseudocode stored procedure (OK, it's a python function):
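The original snippet didn't survive in this copy; going by the description that follows (a dispatcher over opcodes 1 and 2 that never calls eval), a hypothetical reconstruction might look like:

```python
def stored_procedure(ops):
    # A tiny interpreter: each (opcode, argument) pair is plain data,
    # yet the function effectively executes it as a two-opcode program.
    total = 0
    for opcode, arg in ops:
        if opcode == 1:    # opcode 1: add the argument
            total += arg
        elif opcode == 2:  # opcode 2: multiply by the argument
            total *= arg
    return total

print(stored_procedure([(1, 3), (2, 4)]))  # (0 + 3) * 4 = 12
```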
You can provide any input to that. You could think of this as a function which "treats all input as data with no possibility to run as code" (it never calls eval!). But you could also usefully think of this as defining a tiny virtual machine with opcodes 1 and 2. If you think of it that way, you'll be forced to conclude that it does run user input as code, but the difference is in how you're labeling the function, not in what the function does.
The security gain from a stored procedure, on this analysis, is not that it won't run user input as code. It will! The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.
> The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.
The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this -- the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.
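One way to picture that whitelisting (procedure names and schema invented for illustration, with Python standing in for the database's EXECUTE grants):

```python
import sqlite3

# The only "capabilities" the app is granted: named, fixed queries.
PROCEDURES = {
    "get_user_email": "SELECT email FROM users WHERE id = ?",
}

def call_procedure(conn, name, *args):
    sql = PROCEDURES.get(name)
    if sql is None:
        # Anything outside the whitelist is refused outright.
        raise PermissionError("no such procedure: " + name)
    return conn.execute(sql, args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")
print(call_procedure(conn, "get_user_email", 1))  # [('alice@example.com',)]
```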
> The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this -- the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.
This could also be achieved with a well-written microservice/package that developers go through, without depending on the DBA.
The philosophy and semantics are an interesting side issue, but I'd say the default meaning of those words is that your data, in the SQL system, is not treated as SQL code.
Stored procedures are bad in so many ways: they're harder to deploy and revert than code, harder to unit test*, harder to refactor, and every implementation I have ever seen with business logic in stored procedures instead of microservices/packages/modules has been a nightmare to maintain.
* At least with .NET/Entity Framework/LINQ you can mock out your DbContext and test your queries with an in-memory List<>
Disagree. I've implemented unit tests that connect to the normal staging instance of our database, clone the relevant parts of the schema into a throw-away namespace as temporary tables, and run the tests in that fresh namespace. About 100 lines of Perl.
That was five years ago. These days, it's even easier to do this correctly since containers allow you to quickly spin up a fresh Postgres etc. in the unit test runner.
It’s even easier and faster when you don’t have to use a database at all and mock out all of your tables with in memory lists. No code at all except your data in your lists.
It also need not be correct. If you're only ever doing "SELECT * FROM $table WHERE id = ?", you're fine, but a lot of real-world queries use RDBMS-specific syntax. For example, off the top of my head, the function "greatest()" in Postgres is called "max()" in SQLite. What is it called in your mock?
Mocking out tables with in-memory lists adds a huge amount of extra code that's specific to the test (the part that parses and executes SQL on the lists). C# has this part built in via LINQ, but most other languages don't.
By the way, I see no practical difference between "in-memory lists" and SQLite, which is what I'm currently using for tests of RDBMS-using components, except that SQLite is much better tested than $random_SQL_mocking_library (except, maybe, LINQ).
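The greatest()/max() divergence mentioned above is easy to check against SQLite itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# In SQLite, multi-argument max() is the scalar "largest of its arguments";
# Postgres spells the same function greatest(). A mock that only knows one
# dialect will reject queries written for the other.
row = conn.execute("SELECT max(2, 7)").fetchone()
print(row[0])  # 7
```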
You are correct, if I were doing unit testing with any other language besides C#, my entire argument with respect to not using a DB would be moot. But I would still rather have a module/service to enforce some type of sanity on database access.
The way LINQ works makes this type of testing possible: queries are compiled to expression trees at compile time, and the provider translates them at runtime to the destination, whether that's database-specific SQL, Mongo queries, or C#/IL.
Yeah, I thought the same thing until I found a colleague who was very fond of calling exec_sql in stored procedures, with the argument being a concatenation of the sp arguments.
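That antipattern is worth spelling out: once the procedure concatenates its argument into dynamic SQL, the injection hole is right back. A sketch with sqlite3 standing in for exec_sql (schema invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])

def get_balance_proc(owner):
    # Mimics a stored procedure that builds its query by concatenating
    # the argument -- exactly the exec_sql-with-concatenation pattern.
    return conn.execute(
        "SELECT balance FROM accounts WHERE owner = '" + owner + "'"
    ).fetchall()

print(get_balance_proc("alice"))        # only alice's balance
print(get_balance_proc("' OR '1'='1"))  # every account's balance leaks
```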
It's mobile-centric, browser-based, and gives you access to Messenger. Also, it doesn't require the "hack" of requesting the desktop site (which may or may not work). The trade-off is that lots of automatic JavaScripty things don't work the same way.
Watching him play Cold Waters is a delight.