
The CDN should pass through whatever you give it. If your servers return gzip content for a request, the CDN will cache it.

It sounds like you want a configurable distributed web server instead of a CDN proxy cache.



That's exactly how MaxCDN and SimpleCDN work, but CloudFront and Rackspace Cloud don't pass requests through to your own server; rather, they require you to manually upload the files you want to serve via the CDN in advance.


But CloudFront now supports custom origins, which I believe allow gzip by forwarding the Accept-Encoding header and caching different versions of the file depending on the value of that header.
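The caching behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not CloudFront's actual implementation: the cache key includes the forwarded Accept-Encoding value, so compressed and uncompressed variants of the same URL are cached independently.

```python
# Hypothetical sketch of a proxy cache that keys on Accept-Encoding,
# so gzip and identity variants of a URL are cached separately.
cache = {}

def cache_key(url, request_headers):
    # Including Accept-Encoding in the key keeps the compressed and
    # uncompressed variants of the same URL from overwriting each other.
    return (url, request_headers.get("Accept-Encoding", ""))

def fetch(url, request_headers, origin_fetch):
    key = cache_key(url, request_headers)
    if key not in cache:
        # Cache miss: forward the request (including Accept-Encoding)
        # to the custom origin, which decides whether to compress.
        cache[key] = origin_fetch(url, request_headers)
    return cache[key]
```

A request with `Accept-Encoding: gzip` and one without it therefore hit the origin once each, and repeat requests for either variant are served from cache.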


This is great to know, but it still seems like a lot of unnecessary work on our part. It means that for each file we have to manually create and upload a compressed version, ensure the compressed and uncompressed versions stay in sync, and properly set up custom origins for the files.

Instead, Amazon's front end should just check the incoming Accept-Encoding header and automatically compress as needed.



He wants the transfer to be compressed, which depends on the capabilities of the browser, not on the content. Admittedly, the names of the headers that control all this make it kind of confusing.
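To make the header mechanics concrete, here's a minimal origin-side sketch (my own illustration, not any CDN's code): the browser's Accept-Encoding request header advertises what it can decode, the server's Content-Encoding response header says what was actually used, and Vary tells downstream caches to key on Accept-Encoding.

```python
import gzip

def respond(body, request_headers):
    """Return (response_headers, response_body) for a given request.

    Accept-Encoding (request) = what the browser can decode.
    Content-Encoding (response) = what the server actually applied.
    Vary (response) = which request header caches must key on.
    """
    headers = {"Vary": "Accept-Encoding"}
    if "gzip" in request_headers.get("Accept-Encoding", ""):
        headers["Content-Encoding"] = "gzip"
        return headers, gzip.compress(body)
    # Browser didn't advertise gzip support: send the bytes as-is.
    return headers, body
```

Note the asymmetry that makes the names confusing: the negotiation is driven by an "Accept-" header on the request but answered by a "Content-" header on the response.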



