I agree with this. This is why every note I add must be repeated in spaced repetition style. This way I remember what is in my "mind palace" and keep ideas alive (or explicitly delete them / reduce their repetition period if they are bad). I use Obsidian with the repeat plugin.
• Ngrok pulled a pricing bait-and-switch a year ago, raising prices to $240/year/user if you wanted a stable subdomain, even for bandwidth-trivial users.
Edit: Looks like they now have an $8/month/user tier for a single stable subdomain and now offer some edge hosting as well.
$8/user/mo is still far too much for a stable domain without the spam-guard intermediary page, and I'm glad there's some free competition in this space now.
This is my first time using Tailscale, and I set up and figured out Funnel within fifteen minutes.
From what I can gather, it provides the same functionality as ngrok without reaching for another tool. If Tailscale already exists in your networking tool belt, this functionality comes in really handy.
I built an entire app around the idea that every note participates in the spaced repetition queue. For me it has made a lot of difference, as I have managed to internalize (as in, put into practice) a lot of the stuff that I put into my "second brain": for example, insights from books I have read, videos I watched, or blog posts.
One intuition is that you can generate pairs which you know to be the “same thing” (a single example under heavy augmentation) and ensure they’re close in representation space whereas mismatched pairs are maximized in distance.
That’s a label-free approach which should give you a space with nice properties for e.g. nearest-neighbor methods, and it follows that there’s some reason to believe it would be a generally useful feature space for downstream problems.
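To make the augmented-pair idea concrete, here is a minimal NumPy sketch of an NT-Xent-style contrastive loss (the function name and shapes are my own illustration; real implementations such as SimCLR also symmetrize the loss and use negatives from both views):

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Contrastive loss: row i of z_a and row i of z_b are two augmented
    views of the same example (a positive pair); every other pairing in
    the batch is treated as a negative."""
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax: for row i, the positive is column i,
    # all other columns act as negatives to be pushed away.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the two views are identical, the diagonal (positive) similarities dominate each row and the loss is low; mismatched views give a noticeably higher loss, which is exactly the "same thing close, different things far" behavior described above.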
Note that most sample pairing, especially for images, is currently done through augmentations, so the implicit labeling you're doing still rests on weak priors.
Of the methods mentioned in the article, BYOL (and even more so its follow-up SimSiam [1]) has the weakest assumptions and works surprisingly well despite its simplicity.
I agree with the OP that this is still essentially learning on labeled data.
I say this since there are also cases of contrastive-sampling-like ideas with truly unsupervised data.
For example, graph embedding, where the graph's structure implies notions of similarity and distance that the representations should capture.
Like everyone else on this thread, I built a note-taking system!
Mine is called MindPalace, and its special feature is a focus on spaced repetition: remembering the notes after they are written.
For me, whenever I took notes, they would become stale and forgotten. Despite Emerson's quote, “I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me,” I felt there was a lot more to be gained from remembering them.
For instance, by following up on meeting notes I can recall past meetings and the insights gained from them, or insights I got from reading books. I have almost 1,000 notes in my personal notebook.
I mean less logical coupling inside the app. It's more in the family of a collection of libraries versus a batteries-included framework. I'm considering that it may be the best of both worlds; I never really liked the recreate-the-universe feeling of Flask.
Is there such a thing as true bootstrapping? One always has to put some sweat/time into the pre-revenue phase, and that time has a cost (opportunity cost: you could have had a job or done consulting).
I have no idea why you would want to declare an x in the loop and then use it outside the loop. There are better ways of doing things.
The fact that you can do such a thing in JavaScript is exactly why JavaScript is such a mess of a language with its globally-declared and hoisted variables.
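A minimal sketch of the behavior in question (the function name here is my own illustration):

```javascript
// `var` is function-scoped and hoisted, so the loop variable survives
// the loop body -- this is the pattern being criticized:
function lastIndex(items) {
  for (var i = 0; i < items.length; i++) {
    // ... work with items[i] ...
  }
  return i; // still in scope; equals items.length after the loop
}

// With ES6 `let`, the variable is block-scoped, and the same
// `return i` would throw a ReferenceError instead.
```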
That sort of practice has never made sense, and is not a language design that Python should follow.