Hacker News | iggldiggl's comments

Also, if your software for whatever reason is using the original libjpeg in its modern (post classic version 6b) incarnation [1]: right from version 7 onwards, the new (and still current) maintainer switched the algorithm for chroma up-/downsampling from classic pixel interpolation to DCT-based scaling, claiming it's mathematically more beautiful and (apart from the unavoidable information loss on the first downscale) perfectly reversible [2].

The problem with that approach, however, is that DCT scaling is block-based, so for classic 4:2:0 subsampling, each 16x16 chroma block in the original image is now individually downscaled to 8x8 and, perhaps more importantly, later individually upscaled back to 16x16 on decompression.

Compared to classic image resizing algorithms (bilinear scaling or whatever), this block-based upscaling can and does introduce additional visual artefacts at the block boundaries, which, while somewhat subtle, are still large enough to be borderline visible even without quite pixel-peeping. ([3] notes that the visual differences between libjpeg 6b/turbo and libjpeg 7-9 on image decompression are indeed of a borderline visible magnitude.)
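To make the mechanics concrete, here's a pure-NumPy sketch of DCT-domain block scaling (illustrative only; libjpeg's actual implementation is fixed-point C, and the helper names here are mine):

```python
# Sketch of the DCT-domain chroma scaling used by libjpeg 7+:
# downscale 16x16 -> 8x8 by truncating DCT coefficients,
# upscale 8x8 -> 16x16 by zero-padding them back out.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

C8, C16 = dct_matrix(8), dct_matrix(16)

def dct_downscale(block16):
    """16x16 -> 8x8: keep only the low-frequency 8x8 DCT coefficients."""
    coeffs = C16 @ block16 @ C16.T
    # sqrt(8/16) per axis = 0.5 in 2D keeps amplitudes consistent
    return C8.T @ (coeffs[:8, :8] * 0.5) @ C8

def dct_upscale(block8):
    """8x8 -> 16x16: zero-pad the DCT coefficients back out."""
    coeffs = np.zeros((16, 16))
    coeffs[:8, :8] = (C8 @ block8 @ C8.T) * 2.0
    return C16.T @ coeffs @ C16
```

After the first, inherently lossy downscale, the up/down round trip really is exact, which is the reversibility argument; the catch is that each block is processed in isolation, so nothing ties neighbouring blocks together at the 16x16 boundaries.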

I stumbled across this detail after having finally upgraded my image editing software [4] from the old freebie version I'd been using for years (it was included with a computer magazine at some point) to its current incarnation, which came with a libjpeg version upgrade under the hood. Not long afterwards I noticed that for quite a few images, the new version introduced some additional blockiness when decoding JPEG images (also subsequently exacerbated by some particular post-processing steps I was doing on those images), and then I somehow stumbled across this article [3] which noted the change in chroma subsampling and provided the crucial clue to this riddle.

Thankfully, the developers of that image editor were (and still are) very friendly and responsive and actually agreed to switch out the JPEG library for libjpeg-turbo, thereby resolving that issue. Luckily, few other programs and operating systems seem to actually use modern libjpeg anyway, most preferring libjpeg-turbo or something else that continues using regular image scaling algorithms for chroma subsampling.

[1] Instead of libjpeg-turbo or whatever else is around these days.

[2] Which might be true in theory, but I tried de- and recompressing images in a loop with both libjpeg 6b and 9e, and didn't find a significant difference in the number of iterations required until the image converged to a stable compression result.

[3] https://informationsecurity.uibk.ac.at/pdfs/BHB2022_IHMMSEC....

[4] PhotoLine


I poked at the app, which surprisingly enough isn't even obfuscated, and as far as I can tell, it's mainly relying on Play Integrity's verdict. I didn't investigate it in detail though, so I can't say for sure whether that's really all, whether they're also running some additional custom checks, or which integrity level they're requiring.


Sarcasm aside, it depends on whether your employer has configured Entra to allow classic TOTP (in which case Microsoft will try to push its own app as the default option, but you can in fact use anything that supports TOTP if you insist), or has instead set the option to only allow Microsoft's proprietary 2FA, which only works with the Microsoft app.


> That is an issue with the capabilities the os exposes to you. The answer to every security issue not "add a backdoor".

Problem is, I strongly suspect we'd still be having the same discussion even if we were talking about "allow the user direct access to all files" instead of "allow the user full root rights".

Because while some of those missing capabilities are "simply" a matter of it being too much effort to provide a dedicated capability for each and every niche use case (though that again raises the question of whether you prefer failing open, i.e. providing root as an ultimate fallback, or failing closed), with file access I suspect this was very much an intentional design decision.


Also time references in stories would become much more cumbersome, and never mind how you'd handle fictional locations…


If you can read German, somebody also wrote a whole dissertation on the subject of the CD's development history:

https://publications.rwth-aachen.de/record/95066


One of the cool/obscure things is that the center hole exactly matches a then-available coin, because that made development and testing easier.


A transcript for a half-hour radio comedy show with some formatting takes up about 60 kB. The English Wikipedia page for Monty Python is about 130 kB in pure UTF-8 text, and the actual HTML page takes up around 660 kB (plus/minus, depending on which Wikipedia theme exactly you use).

So large, text-heavy pages don't seem too unlikely to exceed 250 kB, especially if they also include some amount of formatting that's more substantial than just a minimal bunch of <p> tags.


> What most of these people do not seem to get is that proper sandboxing does not only protect against attacks from the inside (rogue developer, supply chain attack), but also from the outside.

The problem is that strict file system sandboxing in particular also breaks a substantial number of workflows that can't be modelled as "only ever open the exact file the user explicitly picked". (Any multi-file file formats are particularly affected, as well as any UI workflows that don't integrate well with strictly having to use the OS file picker.)

So you need some escape hatch for optionally allowing access to larger swathes of the file system, or even really everything as before, but that in turn then risks being abused again by malicious actors. And then…?

Plus there are things like Android's implementation, which initially used an API completely incompatible with classic file APIs, and which even today causes noticeable performance overhead if you need more than occasional access to a single file here and there.


I think the problem is that the toolbox we can deploy to solve these problems is so empty.

For example, it’s useful for a music player with metadata editing features to have read/write access to the whole filesystem, but that constitutes a significant risk since all we can do is wholesale allow or prevent access to the whole filesystem. What if the system could allow it to access only music files, though? That’d scope the risk back down to almost nothing while also allowing the music player to do its job.
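As a toy sketch of what such a scoped grant could look like (everything here is hypothetical; no current desktop OS exposes exactly this policy):

```python
# Hypothetical "music files only" scope check: access is allowed only
# for paths that resolve to somewhere under the granted root AND have
# an audio file extension. Purely illustrative.
from pathlib import Path

AUDIO_EXTS = {'.mp3', '.flac', '.ogg', '.m4a', '.opus', '.wav'}

def music_scope_allows(root, requested):
    p = Path(requested).resolve()
    try:
        # Rejects anything outside root, including ../ traversal,
        # because resolve() normalises the path first.
        p.relative_to(Path(root).resolve())
    except ValueError:
        return False
    return p.suffix.lower() in AUDIO_EXTS
```

The interesting part is that the risk conversation changes entirely: a metadata editor granted this scope can corrupt your music at worst, not read your SSH keys.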

This is the kind of thing I’ve been getting at in the other replies. Nobody has really sat down and given system level security controls a deep rethink.


I think Apple's implementation in macOS is the only one that offers some slightly more advanced features, but even those don't get you that far.

(There's some sort of way to store permission references with relative paths in a file, though that most probably wouldn't survive files being exchanged cross-platform, and other than that it's mainly automatic access to 'related' files, i.e. same file name but a differing extension – that solves some sidecar files, like video subtitles or certain kinds of georeferenced images, but large capability gaps still remain. Even the video subtitle example stops working if the file name is no longer 100 % the same, like when you have multiple subtitle files for differing languages, where VLC for example supports prefix-matching the video file name against the subtitle files.)
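The prefix-matching behaviour mentioned above could be sketched like this (a hypothetical helper, not VLC's actual code, and exactly the kind of lookup a strict same-stem "related files" rule can't express):

```python
# Hypothetical VLC-style subtitle lookup: a subtitle file matches if
# its name merely STARTS WITH the video's stem, so "movie.en.srt"
# still matches "movie.mkv". Illustrative sketch only.
from pathlib import Path

SUBTITLE_EXTS = {'.srt', '.sub', '.ass', '.vtt'}

def matching_subtitles(video, candidates):
    stem = Path(video).stem
    return [c for c in candidates
            if c.suffix.lower() in SUBTITLE_EXTS
            and c.stem.startswith(stem)]
```

An Apple-style "same stem, different extension" grant would hand the app movie.srt but not movie.en.srt; prefix matching needs a broader (and harder to display) permission.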

And while your idea does have its merits, I fear that pretty soon you still hit a point where you can't sensibly and succinctly display those more complex types of permissions in the UI.


> And while your idea does have its merits, I fear that pretty soon you still hit a point where you can't sensibly and succinctly display those more complex types of permissions in the UI.

I could very well be wrong, but my inclination is that it's possible, but it's going to take the sort of fundamental R&D that desktop operating systems haven't seen in decades. It can't just be tacked on; everything has to be designed with this new system in mind.


See https://www.thecontactpatch.com/ for some interesting reading material related to that.


Those are for the opposite problem – detecting defective trains (overloading or otherwise faulty weight distribution, as well as wheel flats).

