Compression optimizes for wall-clock time, which is (roughly speaking) the sum of the time it takes to get the compressed data and the time it takes to decompress it. Downloads over the network are almost always bottlenecked on network speed, so the heavier the compression, the better. However, when loading data from the local disk, the I/O can be so fast that decompression becomes the bottleneck and slows down loading.
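The tradeoff above can be sketched with a toy model. All the numbers here (asset size, compression ratios, decompression speeds, bandwidths) are made-up illustrative values, not measurements of any real codec:

```python
# Toy model: total load time = transfer time + decompression time.
# "heavy" compresses better but decompresses slower; "light" is the opposite.
ASSET_SIZE_MB = 100  # uncompressed size (hypothetical)

CODECS = {
    # name: (compression ratio, decompression speed in MB/s) -- illustrative only
    "heavy": (4.0, 200),
    "light": (2.0, 2000),
}

def load_time(codec: str, bandwidth_mb_s: float) -> float:
    """Seconds to fetch the compressed data and decompress it."""
    ratio, decomp_speed = CODECS[codec]
    compressed_mb = ASSET_SIZE_MB / ratio
    return compressed_mb / bandwidth_mb_s + ASSET_SIZE_MB / decomp_speed

for bandwidth in (10, 3000):  # slow network vs fast NVMe SSD, in MB/s
    best = min(CODECS, key=lambda c: load_time(c, bandwidth))
    print(f"at {bandwidth} MB/s, {best} compression wins")
```

With these numbers, the heavy codec wins over a slow network and the light codec wins off a fast local disk, which is exactly the split the paragraph describes.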
Because of that, optimizing for download time and optimizing for startup time end up being different goals.
With loading speeds so different in these two cases, I wonder if the best solution would be to recompress the data with a different compression level during download and installation. For example, download textures compressed with a good but slow and non-random-access image compression format, and then convert them to a lighter GPU-friendly compression format locally.
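A minimal sketch of that recompress-on-install idea, using zlib's compression levels as a stand-in for two genuinely different codecs (in practice the transfer and local formats would be entirely different, e.g. a slow image codec versus a GPU texture format):

```python
import zlib

# Hypothetical install step: the server ships data compressed hard
# (smallest transfer); after download we re-store it in a lighter
# form that is cheaper to decompress on every subsequent load.
payload = b"example asset data " * 10_000  # stand-in asset contents

downloaded = zlib.compress(payload, level=9)  # what goes over the wire

def install(blob: bytes) -> bytes:
    """Recompress a downloaded blob into the faster local format."""
    raw = zlib.decompress(blob)
    return zlib.compress(raw, level=1)

local = install(downloaded)
# The content is unchanged; only the storage format differs.
assert zlib.decompress(local) == payload
```

The key property is that the one-time recompression cost is paid during installation, when the user is already waiting on the download, rather than on every load.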
Lossless transfer compression is good to have, but it's not enough. Lossy compression can produce files over 10 times smaller, but it's format-specific and needs to be applied carefully by the developer.