But for all but the most extreme cases, it's sufficient to just keep every value in memory until it falls out of your window. Even at 1,000 requests/second, a 5-minute window is still only 300,000 values to store.
There may be a way to implement this in perks/quantiles by adding a timestamp as another piece of per-sample metadata. That would cost extra space, but the cost might be acceptable, or the feature could be made opt-in. Maybe I'll look into this soon.
Edit:
I do agree that if your dataset fits in memory, you're probably better off keeping all the samples and computing the percentile exactly rather than using this package. In fact, perks won't compress at all for datasets of fewer than 500 values.
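For reference, here is what the exact, keep-all-the-samples computation looks like. This `percentile` helper (using the nearest-rank method) is my own sketch, not part of the perks API:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the exact p-quantile (0 < p <= 1) of vals using the
// nearest-rank method. This is the "just keep all the samples" approach:
// O(n) memory and an O(n log n) sort per query.
func percentile(vals []float64, p float64) float64 {
	s := append([]float64(nil), vals...) // copy so the caller's slice is untouched
	sort.Float64s(s)
	// Nearest rank: ceil(p * n), converted to a zero-based index.
	idx := int(math.Ceil(p*float64(len(s)))) - 1
	if idx < 0 {
		idx = 0
	}
	return s[idx]
}

func main() {
	vals := []float64{9, 1, 8, 2, 7, 3, 6, 4, 5, 10}
	fmt.Println(percentile(vals, 0.5)) // prints 5
	fmt.Println(percentile(vals, 0.9)) // prints 9
}
```

For small datasets this is both simpler and exact, which is why the compression machinery only pays off once the sample count gets large.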
(Intuitively, a true sliding window seems very hard, maybe impossible -- how do you subtract old events from the summary without keeping a complete record of them?)