Take more screenshots (2022) (alexwlchan.net)
369 points by sanketpatrikar on Jan 28, 2023 | 176 comments


When I started making a game [0] last year, the first thing I did was write a little Unity script that takes a screenshot of the opening scene, counts the current lines of code using cloc [1] (for fun, not as a true measure of anything), and occasionally renders it all out to an image file.

With that I'm able to create some pretty fun time lapses of progress. I've been doing this at an arbitrary milestone, whenever my Luau [2] LOC surpasses C++ by another factor. This post reminded me I'm overdue for another now that Luau > 3x C++ LOC.

I find it rewarding to look back at my progress. I'll share in case it's interesting for you too [3].

[0] https://store.steampowered.com/app/2168330/Helmscape/

[1] https://github.com/AlDanial/cloc

[2] https://luau-lang.org

[3] https://twitter.com/kineticpoet/status/1619508466212831232
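
Not the commenter's Unity script, but a rough shell approximation of the same idea for a desktop project, assuming cloc, scrot and ImageMagick are installed (paths, sizes and the source directory are placeholders):

  #!/usr/bin/env bash
  # Count LOC, grab a screenshot, and stamp the count onto the frame.
  set -euo pipefail
  stamp=$(date +%Y%m%d-%H%M%S)
  loc=$(cloc --quiet --csv src/ | tail -n 1 | cut -d, -f5)   # "code" column of the last row (SUM when several languages)
  scrot "/tmp/shot-$stamp.png"
  convert "/tmp/shot-$stamp.png" \
      -gravity SouthEast -pointsize 28 -fill white \
      -annotate +20+20 "LOC: $loc" \
      "$HOME/timelapse/frame-$stamp.png"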


Nice, love the time lapse gif!


Thanks for sharing! Very interesting to see the development in the timelapse. Going to do this for my next Unity project.


The time lapse on your twitter is great... really fun to see those simple shapes and environment get more detailed, better lighting, shading, etc. Thanks for sharing!


Thanks for sharing, super cool. I wishlisted it and downloaded the demo.


I made my first website on geocities and it kills me that I cannot for the life of me find a record of it any longer.

It was a website that "sold" replica/counterfeit watches.

I was enthralled with them as a child for some reason. I knew they were popular items on the fledgling internet, so I created a webpage for them on geocities and set up an email address to contact if you wanted to purchase one.

I had lots of interested parties who wanted to buy the watches, but unfortunately my parents would not front a 12-year-old the cash required to purchase them in bulk from the shady internet supplier I had found. Probably a smart move on their part.

Fun times!

I ended up purchasing a few really high quality replicas when I visited China decades later as an adult.

I miss the early internet.


Geocities was reasonably well archived; do you remember the URL?

https://archive.org/web/geocities.php


I'm in a similar situation. I even remember more or less what it looked like, but for the life of me I cannot remember the URL or what nickname I might have used; I can't find it under the 16-year-old-me nickname I do remember :(

Edit: reading the page, it's a scrape from 2009. My page was from 1997-1998 and basically used only by me and my circle of friends, so chances are it has been lost forever.


archive.org geocities scrapes go back to 1996, so it is plausible it could have survived:

https://web.archive.org/cdx/search/cdx?url=geocities.com&mat...

If you ever remember any of the details, the CDX API can probably help.

https://github.com/internetarchive/wayback/blob/master/wayba...
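
For example, a hedged sketch of such a CDX query using curl (the neighborhood/nickname path is a placeholder for whatever fragment you remember):

  # List captures whose URL starts with the given prefix (JSON output):
  curl -s "https://web.archive.org/cdx/search/cdx?url=geocities.com/SiliconValley/yournickname&matchType=prefix&output=json&limit=50"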


IIRC you can search the Web Archive by string, not just URL.


I was worried about the same issue as well, although mostly in the area of website/knowledge screenshots, as my desktop screenshots are pretty low in quantity.

I recently decided to self-host the ArchiveBox.io app. It feels a bit rough but delivers decent results. I love the PDFs and single-file HTML dumps.

I managed to secure some very old sites I had relied on for ages but never really saved. Everything can be refreshed, dumped again, tagged, and searched. The UI is actually a Django admin page.
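
In case it saves anyone a search, the basic setup looks roughly like this (command names are from the ArchiveBox docs; the paths are just examples):

  pip install archivebox
  mkdir ~/archive && cd ~/archive
  archivebox init                                  # creates the index and the Django admin
  archivebox add 'https://example.com/old-site'    # snapshot: HTML dump, PDF, screenshot, etc.
  archivebox server 0.0.0.0:8000                   # browse/tag/search via the web UI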


If I were your father, I would have been damn proud of you. I wouldn't have fronted you the cash either though.


I'm in a similar situation too. I had a website in 2000 hosted at cjb.net.


Try the archive.org CDX API, you might be able to find it.

https://github.com/internetarchive/wayback/blob/master/wayba...


My first website, which I started in 1999, was first crawled by archive.org in 2005. I think it was a few years later that the computer science department decided to no longer allow alumni to have free home pages, and I lost access to the site.


I take screenshots all the time, and sync them to iCloud so I can access them immediately on my phone/iPad. I also back up previous years of screenshots; I think I have them back to 2015 right now.

But one thing that I've been keeping my eye on as well (and used for a month) is https://www.rewind.ai/ which records everything that happens on your M1/M2 Mac screen and is immediately searchable.

Just like Alex says here "They’re not as good as having the original, working thing – but they’re much better than nothing. I can dip in quickly and easily, and instantly be reminded of the creativity of my past self." And I think arguably Rewind solves for that completely (with the added cost of increased storage space and less specific capturing/resolution).

This isn't a pitch for their product, it's just a natural progression to screenshots/capture that I believe is relevant here.


This looks absolutely amazing; it honestly feels like someone made this program just for me, because I kept nodding to myself as I was reading through the feature list. Just what I want! But I'd never use closed-source software for something as personal as recording everything I do on my laptop. They could give me Wireshark traces and weekly audits with a digest going straight to my inbox, proving nothing ever leaves my Mac, and I still wouldn't use it. It'd just feel insecure even if it realistically probably isn't. Hopefully we'll see something like this as a fully OSS solution some day.


Ubuntu (around version 14 or so) had a really good activity journal, where you could see on a timeline every file you touched, website you visited, song you played, etc. It was not quite a continuous video, but it was so helpful. Something changed that seems to have stopped the whole project from working on newer versions. Maybe the Zeitgeist log it relies on is not used by applications so much any more.

Edit: found it https://manpages.ubuntu.com/manpages/jammy/man1/gnome-activi...


As an added note, I stopped using Rewind because of the "steep" monthly cost ($20 a month), and the fact that I'd only want to use it as a long-term backup.

Upon reflection, $240 a year to have completely uncut video of my computer screen instantly searchable forever is a very valuable offering. Not to mention the fact that the file size is fairly small compared to large 4k videos.

If it were built into macOS I'd be blown away and use it forever; as it is, I'm still on the fence about it.


If the Rewind feature set—especially the search-through-screen-recordings aspect—is the most compelling part, then perhaps something I've been working on may be of use. It can basically do everything Rewind can (though audio support is a WIP), doesn't cost anything, and feedback would be really helpful.

Shoot me an email at govind <dot> gnanakumar <dot> com if you're curious or interested in beta testing it.


I'm interested, and I'm attempting to email you, though I can't quite figure out the email address. I keep getting an address not found error.


I poked around their post history and their email is actually: govind <dot> gnanakumar <at> outlook <dot> com


Wow thanks for that! Much appreciated.


I also stopped using it for the cost, even though I loved the software.


It's a niche product it seems. I have zero interest in it.


I mean, I have zero interest in Instagram, but that doesn't make it a niche product.


That makes sense. This archive tool is even more out there because it's not a direct link or file of the content, but just a capture of what the content would be. My work is mostly UI/UX and design related, so the capture is actually pretty valuable.

I'm a bit of a digital packrat, and sometimes like to review personal files from nearly a decade ago, so I think something like this would increase in value for me over time.


Sort of. The more you hoard, the more painful it is to trawl through the hoard to review it. It reminds me of Linus Torvalds' "Only wimps use tape backup. REAL [adults] just upload their important stuff on ftp and let the rest of the world mirror it." -- let your work be remembered on the basis of who it made an impression on.


I've been looking into Nextcloud/Photoprism for a more efficient storage/browser experience, but honestly the software seems pretty amateur so far (vs Google Photos / Apple Photos).

For now, everything is loosely stored on an SSD, with different folders for year/month/day (of backup). Screenshots are stored by year.

I'd really like a Google Photos browsing experience for all my data backups, regardless of content type (well, with filters).


That quote is partially missing the point here: part of the value to you is the emotional one, as the creator of that thing - the most obvious example from a slightly different domain being baby photos.


Rewind looks cool - almost magical (I imagine part of the magic is due to the M1/M2 chips).

But I would be concerned about relying on them too heavily. They've raised VC money, which means their future path is unclear. I don't know what direction their product will take if pushed by VC growth expectations. In particular, this is just a client-side-only app with a pretty clear and finite feature set, but VC influence means there's a risk they will be shoehorning in features and online capabilities to promote growth.


(I’m the co-founder & CEO of Rewind.)

While it’s true we raised money from VCs, we did not give them a board seat or voting control. I have super-voting shares and am the only member of the board. We will never be pushed around by VCs.

Our vision is to give humans perfect memory and we will not let VCs get in the way.


Hey well I'm glad to see you in the replies to one of my comments! Love what you're doing, and I'm following updates.


A reminder that iCloud Photos is not end to end encrypted, and that both Apple and the US federal police (FBI et al) have warrantless access to the contents of iCloud, so you are creating a huge trove of data that could be misused against you by police at any point in the future should it be politically expedient to do so. Screenshots frequently contain all sorts of extremely sensitive information.

This may not be part of your threat model, but it should at least be known by people so they can evaluate the risk themselves.



Approximately nobody has this turned on.

It's opt-in, so approximately nobody ever will.

Everyone you iMessage with will still be putting all of your conversations and attachments and iCloud message sync keys into non-e2ee backups from their end, so turning this on won't accomplish much even if you know about it.


> Approximately nobody has this turned on.

It doesn't matter as long as the person storing screenshots in iCloud turns it on.

> Everyone you iMessage with will still be putting all of your conversations and attachments and iCloud message sync keys into non-e2ee backups from their end

Weren't we talking about storing screenshots in iCloud photos?


Tomorrow it might become opt-out.

But I still wouldn't trust Apple 100%: we know that they were among the companies silently cooperating with the NSA, and the potential for backdoors in their software isn't nil. (Whether you should consider this a real threat depends on your circumstances, of course.)


Actually came here to mention rewind, too.

It’s, simply put, a groundbreaking game changer for me.

Be it finding text in conversations, recalling the face of an applicant when their name doesn’t ring a bell, finding code, finding text on websites you visited, or going back to make sure your eyes didn’t trick you, …

I recently started to enable closed captions in all applicant interviews so I can search for specific terms I recall after the fact.

Truly amazing.


How does it work when a simple search returns too much data? Or is that an edge case? I think that if you used Rewind for several years, the number of hits a specific search would return would be insane (for example, searching for "dog" if you remember seeing a specific dog breed some time ago, but aren't sure when or where).


One kinda similar app on Windows is ManicTime https://www.manictime.com/ which can track opened apps, documents, and URLs, and put them on a timeline. It can also automatically take screenshots (on a rolling period, limited to the paid version), but I find the limited free version to be quite useful in itself.

Though, while it's neat to have activity history accessible like this, it still doesn't compare to the value of the intentionality of manual screenshots, bookmarks, notes, files, and stuff like that. While it may be neat to be able to get back to something you missed while browsing, if it's done only to find it and put it down as a note/bookmark/screenshot/file, you just come back to systems that are already present and searchable.


I have been taking screenshots for years, but it developed into a hoarding OCD. And it is ruining my life. My regular life is being interrupted by taking screenshots all the time, so I have relatively less time to work on important things. I don't even view those screenshots again in the future so the time spent on those screenshots is completely wasted. I don't recommend having a habit of taking screenshots to those that may be vulnerable to OCD.


> My regular life is being interrupted by taking screenshots all the time, so I have relatively less time to work on important things.

I think I lack perspective on this and am not quite understanding what you mean. Where is the interruption coming from? Why do you need to take screenshots?


> I think I lack perspective on this and am not quite understanding what you mean. Where is the interruption coming from? Why do you need to take screenshots?

I have this too, but it's in the form of saving URLs of interesting things. I just fear stumbling upon something nice, not saving it, and then not being able to find it again (which happens regularly, btw; Google search and Reddit search are pretty bad at finding things when my only context is "I saw it last week or something").

Honestly it's exhausting.


It could be even worse once you realize that webpages can disappear. Then your saved URL doesn’t even help; you’ll actually need to save the page as some kind of archive.


The tool I mentioned above auto-saves the full text content of your visited web pages locally, so you can look back even if the server is down.


Safari (and I think other browsers) has some great history search features. You could maybe explore those.

https://support.apple.com/guide/safari/search-your-web-brows...


>Your Mac can keep your browsing history for as long as a year, while some iPhone, iPad, and iPod touch models keep browsing history for a month.

>Your History shows the pages you've visited on Chrome in the last 90 days.[1]

I'm not sure if nextaccountic would find 1 year or 1 month or 90 days sufficient.

[1] https://support.google.com/chrome/answer/95589


I don't use Macs. Firefox history is pretty much useless, but there are some extensions like https://addons.mozilla.org/en-US/firefox/addon/firefox-bette... but none are really satisfying.

Generally the best extension to save things is https://addons.mozilla.org/en-US/firefox/addon/tab-session-m...


You can use a personal search engine; it runs locally, allowing you to search back through the content you visited without SEO junk or privacy leaks.

https://github.com/beenotung/personal-search-engine


Doesn't YaCy do the same thing?


Sounds like it's a compulsion: the same way people check the stove or faucet just in case, GP takes screenshots just in case.


Just set up a screen recorder to record a frame once every second into a video file.
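
For example, with ffmpeg this is a one-liner (Linux/X11 sketch; the display name, size and codec settings are assumptions):

  ffmpeg -f x11grab -video_size 1920x1080 -framerate 1 -i :0.0 \
         -c:v libx264 -preset veryfast -crf 28 screen-log.mkv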


I used to use TimeSnapper for that. The classic version is free.

It did use a crapload of disk space though (20GB per week?), and most of the data is almost identical, so I started designing an algorithm to store only the differences between images before realizing I had reinvented video codecs... so I just made an ffmpeg one-liner to convert the image sequences to mp4 :)
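
Something like the following, presumably (not the commenter's exact command; the frame rate and glob are guesses):

  ffmpeg -framerate 10 -pattern_type glob -i 'shots/*.png' \
         -c:v libx264 -pix_fmt yuv420p timelapse.mp4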


You can capture straight from the GPU into the GPU's video encoder and directly get H.265 frames efficiently, without round-tripping the whole framebuffer through system RAM each time.


NVENC is actually quite nice and easy to use for that, but if you want to do any serious post-processing you'll need to parse H.265 (the headers at least), and it's not the easiest parser to write.


How can I do that?


On Windows, there's an API to capture the display as a DX texture. Then either use NVENC or AMF to encode. You'll get compressed frames that you can just stuff straight into whichever video container you like.


Good thing you realized this is exactly what video codecs have been doing for decades. Thanks for the chuckle, though.


Since you used the phrase “my regular life is being interrupted,” it really does sound like a bit of a compulsion. I’m sorry to hear that.

Have you tried talking to a psychologist or a psychiatrist? OCD and similar disorders are hard to “cure,” but I know from close friends and family that therapy and some drugs can both be helpful in giving you back some control, as well as dealing with some of the attendant anxiety and depression.

Either way, hope you’re doing okay!


You can download existing auto-screenshot tools or even set up a cron job to take them every 2 seconds. Maybe it could reduce your OCD by knowing it is always being done for you?
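
A minimal sketch of automating it (note that plain cron only fires once per minute, so for a 2-second interval a simple loop or a systemd timer is easier; scrot is just one example capture tool):

  #!/usr/bin/env bash
  # Capture the screen every 2 seconds into timestamped files.
  mkdir -p "$HOME/auto-shots"
  while true; do
      scrot "$HOME/auto-shots/%Y-%m-%d_%H-%M-%S.png"
      sleep 2
  done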


I feel like screenshots every 2 seconds would take more space than a full screen video capture.


It does! I ran into this issue (disk filling up with screenshots) and realized that the optimal compression solution already existed and was called a video file, so I set up a script on cron to convert the folder full of PNGs to an mp4.


What size differences are we talking about here? I took one screenshot (+webcam shot) once every hour for three years and it takes up around 5gb for 14,000 images. I believe I compressed it slightly some time ago.


Yeah, TimeSnapper takes them every 5 seconds, so I'd get 5-10 thousand images per day, about 1GB per day depending on quality settings.

I appreciated the (relatively) high frame rate because I enjoyed watching the timelapse at the end of the day, as a kind of review.


I never got into this habit, and I guess I'm old enough that I only recently became aware it is a habit at all.

I do take screenshots while debugging or if I want to show someone something curious, and I'll take those pretty aggressively just in case (the files are small, hard drives are cheap). But I feel no compulsion here.

Where does the drive to take so many screenshots come from?


For me it probably started with thinking I'd found something interesting that I could share with my friends, but eventually it developed into taking screenshots of even the most remotely interesting stuff, which no one really cares about but me.


  $ ls -1 | grep scrot | wc -l
  76422
That's just this laptop. My file server's broken at the moment (ZFS redundancy FTW \o/ first ever disk failure) and it has a few more moved onto it from different machines (probably 50-70k), and then there's probably 20k on my previous desktop I haven't moved over. Hrm, and then there are my phones... maybe 200k all up?

I'll eventually figure out an aggregation and OCR pipeline. In the meantime, while circumstances don't permit that, I've slowly accepted the scale I've decided to operate at. It's a commitment. It started out as OCD and now it's just... an interesting habit I actually think would be suboptimal to break. I've never known how to organize words into a journal format, so this is the next best thing I've got.

And it's fun holding down the 'p' key in sxiv and just rewinding through the flickery slideshow of interestingness. Literally everything has a story in it. It's fun.
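
For anyone in a similar spot, a minimal sketch of such an OCR pass, assuming Tesseract is installed (the filename glob matches scrot's default names; adjust as needed):

  #!/usr/bin/env bash
  # Produce one .txt per screenshot, skipping ones already processed,
  # so the whole pile becomes greppable.
  mkdir -p ocr
  for f in *_scrot.png; do
      out="ocr/$(basename "$f" .png)"
      [ -f "$out.txt" ] || tesseract "$f" "$out" >/dev/null 2>&1
  done
  # Then e.g.: grep -ril 'search term' ocr/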


I should put everything into CLIP and make a semantic search engine. (Also OCR is a good idea)


I and many others can easily empathize if we look back at our phone camera libraries, full of mundane momentarily interesting images from our lives.

But what screenshots trigger that capture itch?

One thing I have (on my phone) are high score/achievement moments in games. Ironically, Threes game has its own internal hall of fame, and the UI is made up like a framed picture on a wall.


It's quite a common thing; Xfire went even further and introduced its Flashback feature 15 years ago:

https://www.extremetech.com/extreme/82410-xfire-gets-video-c...

> The TiVo-like function allows users to go back and record footage five to ten seconds after it happened.


I had a script once which generated a screenshot every minute or so. The idea was that I would then use some other machine learning supported scripts to extract some statistics. One motivation was that I needed to collect working time statistics, and I wanted to count the minutes that I had Eclipse open.

I even wrote a DB to store PNG files more efficiently. It deduplicates blocks, and thus achieves much higher compression rates: https://github.com/albertz/png-db

The analytics were harder than I thought. I had OpenCV at hand and tried using SIFT features (if I remember correctly; note that this was 12 years ago, before we had more powerful neural networks). It took lots of trial and error with a lot of ugly heuristics, and in the end I just tried to identify the Eclipse icon in the Mac Dock. But it worked, more or less.

And the scripts to analyze the screenshots: https://github.com/albertz/screenshooting

Then I developed some scripts which would collect such information more directly: the app in the foreground, including the opened file or URL, etc. This is still running, with many years of data now. But I never really had any use for that data. Maybe someday I will extract some interesting statistics out of it.

This script is here, with support for Linux and Mac: https://github.com/albertz/timecapture
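
A much smaller sketch of the same "log the foreground app" idea on Linux/X11, using xdotool (the author's timecapture scripts linked above are the more complete version):

  #!/usr/bin/env bash
  # Append a timestamped line with the active window's title once a minute.
  while true; do
      printf '%s\t%s\n' "$(date -Is)" "$(xdotool getactivewindow getwindowname)" \
          >> "$HOME/window-log.tsv"
      sleep 60
  done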


I made something similar with "brotab", grabbing full text content of all open tabs of the browser (and other metadata) every so often and archiving it, so I can search it like a kind of extended history and correlate URLs by context.


> For example, my oldest files were made in Microsoft Word on an iMac G3 running Mac OS 9. I can open them in a modern word processor, and they look similar – but it’s not the same. Some of the fonts and graphics are missing, and I don’t know where I’d find replacements.

> It’s even harder for an undocumented side project I abandoned years ago. Having the code isn’t the same as a working application.

The author's solution to this is apparently screenshots; I have to respectfully disagree.

For software, side project or not, it should probably come with dependency configurations (granted, in the early 2000s this wasn't as mature as it is today) and some tests. My side projects basically all have tests; these tests are vital for picking a project back up years later and for validation while developing.

For personal notes, I use this script, which upon running `$ diary` creates/opens an entry for the current day in the appropriate folder with vim: https://github.com/Aperocky/diaryman/blob/master/diaryman.sh. Text files will last forever; it has some basic flavoring with markdown, but that's it. The folder where this is indexed is without a doubt the most valuable data on my computer, and it stretches back years.

I do occasionally take screenshots, but never for the reasons the author finds screenshots useful.


  > For software, side project or not, it should probably come with dependency configurations (granted, in early 2000s this isn't as mature as it is today) and some tests.
These may or may not help. Things have certainly changed in the past several years, but if we have learned anything, the "infinite memory of the internet" is anything but. Dependencies vanish and die all the time. So, while you may have a list of dependencies, if you don't have those actual dependencies locally with you, you may be out of luck. Even if the actual project still exists, the older versions you depend on may not.

I can't speak to others, but if you were to actively shelve a Java project, and were using Maven or relying on its infrastructure, I would clean out your local repository cache, rebuild and test the project, then snapshot the project directory and the repository cache. At least then you might have a solid chance of resuscitating the project later on if you needed to.
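
Concretely, one way to do that with a project-local repository cache (standard Maven options and plugin goals; directory names are placeholders):

  cd my-project
  mvn -Dmaven.repo.local=./local-repo dependency:go-offline   # download every dependency into the project
  mvn -Dmaven.repo.local=./local-repo clean verify            # prove it still builds and passes its tests
  cd .. && tar czf my-project-shelved.tar.gz my-project/      # project + cached dependencies in one snapshot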


Or, point Maven to a directory in your repo (`-Dmaven.repo.local=...`) and include it in your Git repo with LFS. Best of both worlds: you get to use dependency management and always have your dependencies around.


> These may or may not help. Things have certainly changed in the past several years, but if we have learned anything, the "infinite memory of the internet" is anything but. Dependencies vanish and die all the time. So, while you may have a list of dependencies, if you don't have those actual dependencies locally with you, you may be out of luck. Even if the actual project still exists, the older versions you depend on may not.

I'd say this problem impacts most of the development stacks out there, whenever you're dealing with a package manager:

  - Node and Javascript have npm
  - Python has pip
  - .NET has NuGet
  - Java has Maven/Gradle
  - PHP has Composer
  - Ruby has gem
And all of those have packages that could technically disappear, or you could have network issues and so on (when I build my container images, I sometimes even have apt fail, though very rarely). I think a safe bet is to run your own package proxy, like Sonatype Nexus: https://www.sonatype.com/products/nexus-repository (there's also JFrog Artifactory in this space, probably others too: https://jfrog.com/artifactory/)

This can improve build speeds because you refer to your own server for getting packages; the proxy will also automatically pull packages that it doesn't have; there are no rate limits to deal with (e.g. Docker Hub pull limits vs. the image being pulled once and stored in Nexus, unless it changes); and you can also pretty easily see just how much stuff you're relying on.

The next step is to also use this server for publishing your own packages, which is suddenly easier - you can manage your own accounts, with nobody to tell you what you can/can't upload and how: you literally have all of the storage on the server at your disposal, redeploy as often as you want.

The only real downside to this is that you are indeed self-hosting it: you need updates, storage and all that. Well, maybe there's also the issue that using custom repositories downright sucks in some stacks - while npm supports something like --registry, I distinctly recall Ruby being a total pain in this regard in a container context (something about Bundler configuration, since it doesn't seem to support a command line parameter): https://help.sonatype.com/repomanager3/nexus-repository-admi...
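
For reference, pointing the common clients at a proxy is usually a one-liner each (the host and repository names below are purely illustrative; they depend on how the Nexus/Artifactory instance is configured):

  npm config set registry https://nexus.example.com/repository/npm-proxy/
  pip config set global.index-url https://nexus.example.com/repository/pypi-proxy/simple
  # Maven: add a <mirror> entry pointing at the proxy in ~/.m2/settings.xml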