Hacker News | hhda's comments

We assume we know the possible space for YouTube URLs, but it might not be a fair assumption.

Take phone numbers as an analogy. Without foreknowledge, or careful analysis of how phone numbers are actually distributed, you might assume every number is assignable, but in fact 555-01xx numbers are reserved in every area code [0]. Each set of reserved numbers makes the real address space that much smaller, which can skew any statistics we gather by sampling it if we don't exclude the reserved ranges from our calculations.

It may be that YouTube reserves certain regions of its ID space (e.g. maybe an ID can't start with a 0, or two visually similar characters such as I and 1 can't sit next to each other), which would make this sampling method slightly less accurate than it might otherwise appear.
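A toy numeric sketch (all numbers invented, nothing here is YouTube's real ID space) of how an unknown reserved region skews "how full is the space" estimates made from uniform random sampling:

```python
# Toy sketch: uniform sampling over an assumed ID space, where part of
# that space is secretly reserved and can never hold a valid ID.
ID_SPACE = 10_000_000      # assumed size of the full ID space
RESERVED = 1_500_000       # IDs that can never be assigned (like 555-01xx)
ASSIGNED = 250_000         # IDs actually in use

# Sampling uniformly over the assumed space converges to this hit rate:
hit_rate = ASSIGNED / ID_SPACE

# Treating the whole space as usable understates how densely the *valid*
# space is filled, because the denominator is too large:
naive_density = hit_rate                          # 2.5%
true_density = ASSIGNED / (ID_SPACE - RESERVED)   # ~2.94%
print(f"naive {naive_density:.4f} vs true {true_density:.4f}")
```

The raw count estimate survives this, but any density-style statistic ("what fraction of URLs are live") inherits the wrong denominator.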

[0] https://en.wikipedia.org/wiki/555_(telephone_number)


Yeah, the -M flag is wonderful (super handy for ignoring minified files that you don't want to see results from, etc), and the -g flag is also great (e.g. `-g '*.cs'` and you'll just search files with the .cs extension; quoting the glob keeps the shell from expanding it first).

Also, the fact that it is a standalone portable executable can be super handy. Often when working on a new machine, I'll drop in the executable and an alias for grep that points to rg, so if muscle memory kicks in and I type grep, it will still use rg.
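A minimal sketch of that setup, assuming the `rg` binary has already been dropped somewhere on the PATH:

```shell
# Point the muscle-memory command at ripgrep for the current shell.
# (Put this in your shell rc file to make it stick across sessions.)
alias grep=rg
```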


If you're a fan of the -g flag to ripgrep then I also recommend checking out the -t flag, short for --type, which lets you search specific file types. You can see the full list with `rg --type-list`. For example, you could just search .cs files with `rg -tcs`.

This flag is especially convenient if you want to search e.g. .yml and .yaml in one go, or .c and .h in one go, etc.


Thanks, I didn't know about `-t`, I'll read up on it.


-t is useful, but -g doesn't require a trip to the help first. Maybe the worse-is-better principle?


Tbh I've always just typed -t followed by something that feels intuitive and it's always worked. Never really bothered looking in help until I made the above comment.


You can add `#![forbid(unsafe_code)]` at the crate root to reject any unsafe Rust in your own code, which should prevent buffer overflows there. Obviously it may make writing some code somewhat harder.
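A minimal sketch of what that attribute buys you (the vector here is just an illustration):

```rust
// Crate-level attribute: any `unsafe` block anywhere in this crate becomes
// a hard compile error, and unlike `deny`, `forbid` can't be overridden by
// a nested `#[allow(unsafe_code)]`.
#![forbid(unsafe_code)]

fn main() {
    let v = vec![1, 2, 3];
    // Safe indexing is bounds-checked: an out-of-range index panics
    // instead of reading past the end of the buffer.
    println!("{}", v[1]);

    // This would be rejected at compile time under forbid:
    // unsafe { println!("{}", *v.as_ptr().add(10)); }
}
```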


Will that restriction also be applied transitively to all dependencies?


No. That kind of restriction cannot realistically be applied to any project above toy scale. The stdlib uses unsafe code to implement a large number of memory management primitives, because the language is (by design!) not complex enough to express every necessary feature in just safe code. Rust's intention is merely to limit the amount of unsafe code as much as possible.
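The pattern the standard library relies on can be sketched in miniature: keep the `unsafe` private behind an invariant and expose only a safe API. (The type below is purely illustrative, not std's actual internals.)

```rust
// Illustrative only: a safe wrapper whose single `unsafe` read is guarded
// by a bounds check, mirroring how Vec/String encapsulate raw pointers.
pub struct Slot {
    data: Box<[u8; 4]>,
}

impl Slot {
    pub fn new() -> Self {
        Slot { data: Box::new([0; 4]) }
    }

    // Safe API: callers can never trigger an out-of-bounds read.
    pub fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i` was bounds-checked just above.
            Some(unsafe { *self.data.as_ptr().add(i) })
        } else {
            None
        }
    }
}

fn main() {
    let s = Slot::new();
    assert_eq!(s.get(1), Some(0));
    assert_eq!(s.get(9), None);
}
```

Callers of `Slot::get` stay entirely in safe code; the unsafety is contained and auditable in one place.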


For that, I believe you need to use cargo-geiger[0] and audit the results.

[0] - https://github.com/rust-secure-code/cargo-geiger


No, and in fact that would be impractical, because you can't do anything useful (e.g., any I/O whatsoever) without ultimately either calling into a non-Rust library or issuing system calls directly, both of which are unsafe.


From what I can tell, yearly Search ad revenue is in the neighborhood of $104 billion [0], and the number of yearly searches served by Google is somewhere in the neighborhood of 3.1 trillion [1]. This brings the revenue per search to somewhere between 3 and 3.5 cents.
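That back-of-the-envelope division can be checked directly (both inputs are rounded estimates from the cited sources):

```python
# Revenue per search, from rounded yearly figures.
ad_revenue = 104e9    # yearly Search ad revenue, USD
searches = 3.1e12     # yearly Google searches served

cents_per_search = ad_revenue / searches * 100
print(f"~{cents_per_search:.2f} cents per search")  # ~3.35
```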

[0] https://abc.xyz/investor/static/pdf/20210203_alphabet_10K.pd... (page 33)

[1] https://www.oberlo.com/blog/google-search-statistics


I guess the issue is that you assumed the type of egg, and then jumped to a conclusion about the experiment based on that assumption. You also assumed the environment - who is to say the environment is Earth? Etc etc.

The problem isn't with mathematicians.


I always find the postmortem of a major tech outage interesting. Ever since Google had a major outage with Maps, etc, I've been going back[0] to check for their breakdown of why they had the outage. Both the technical reason and the way it is explained are always something I'm interested in - like getting a peek behind a curtain you'd otherwise never get to see behind.

[0] https://status.cloud.google.com/maps-platform/incidents/8yNk...


Laravel 9 was released today (hence this thread) - Laravel 8 is the major version that came out directly before it. Further, Laravel 9 is essentially Laravel 8 with a bump to the dependencies, with most of the higher-impact changes coming as a result of those dependency updates[1]. The Laravel 8 From Scratch series has videos as recent as August[2], and the What's New in Laravel 9 series already has 11 videos[3], the most recent posted yesterday. The Laracasts video series are still very much actively updated (with Jeffrey, who runs it, adding new teachers in the last year).

And Laracasts is a 3rd party learning resource anyway, with the first party docs all being well maintained.

1. https://laravel.com/docs/9.x/upgrade

2. https://laracasts.com/series/laravel-8-from-scratch/episodes...

3. https://laracasts.com/series/whats-new-in-laravel-9


I think it's safe to view Laracasts as being "sold" as part of the officially available documentation - and documentation should be ready before a release is pushed out to the public. I absolutely sympathize if I've got the wrong impression of Laracasts (though maybe Laravel should promote it less prominently if that's the case), and I completely understand that it takes time to record updated audio-visual tutorials... but that's exactly the sort of thing I'm talking about: those prominent tutorials being so officially associated with Laravel means that new users trying to learn the system hit windows where the documentation is confusing and out of date.

It seems like an unnecessary risk to have adopted, given that it's not at all standard in the industry - sometimes language designers will publish a version-specific tutorial, but those are understood to be unmaintained, with the docs remaining the official source... Laracasts is a very different thing: it appears to be mostly working, but seems really likely to suddenly and dramatically become more of a danger than a benefit.

One parallel I'd draw on is the community effort, when PHP 5.3 or 5.4 came out, to purge all the terrible advice from StackOverflow. PHP had an issue: its documentation on php.net was fine and followed best practices, but the StackOverflow answers were often _terrible_ - "this will instantly open a gaping hole in your security" terrible. It took a significant amount of effort to delete or properly re-answer those questions, and since that happened the reputation of PHP has hugely improved. Bad documentation existing is worse than no documentation existing (but please don't take this as an excuse to not comment your code - just be conservative in your comments and keep them at a level you can actually afford to maintain).

