memcached is a poor man's distributed shared memory system for clusters. We've been layering on top of it to try to fix its deficiencies: client-side caching, persistence in case memcached drops objects, and so on.
But I was curious if other people had similar problems and how they were solving them.
If you're trying to "fix" memcached's dropping of objects after a time, you shouldn't be using memcached. You should be using something actually designed to persist things. Turning a cache into a persistent store, or vice versa, is a dangerous game, and of the two, cache -> persistent store is the worse direction.
Sounds like your life would be simpler with Redis if you are using memcached to hold state about a computation. Atomic operations on lists and persistence are two points in its favor in this context.
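To make the "atomic operations on lists" point concrete: Redis's RPUSH and LPOP execute atomically on the server, so multiple workers can share one work list without racing each other. A rough in-process sketch of those semantics (a stand-in for the real thing, where the same calls would go through a Redis client; `AtomicList` and the task names are made up for illustration):

```python
from collections import deque
from threading import Lock

class AtomicList:
    """In-process stand-in for a Redis list: rpush/lpop are atomic."""
    def __init__(self):
        self._items = deque()
        self._lock = Lock()

    def rpush(self, *values):
        # like Redis RPUSH: append and return the new list length
        with self._lock:
            self._items.extend(values)
            return len(self._items)

    def lpop(self):
        # like Redis LPOP: pop from the head, or None if empty
        with self._lock:
            return self._items.popleft() if self._items else None

jobs = AtomicList()
jobs.rpush("task-1", "task-2")
assert jobs.lpop() == "task-1"
```

With a real Redis server the lock lives server-side, which is what makes the shared-computation-state use case workable.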
I think we'd have to add some sort of client-side caching on top of it so that we're not fetching the same objects over and over; we tend to saturate our network if we don't do that.
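A minimal version of that client-side layer is a read-through cache in front of the remote fetch: check a local dict first, and only go over the network on a miss. A sketch (the `fetch` callable and eviction policy here are placeholders, not anyone's actual implementation):

```python
class ReadThroughCache:
    """Client-side cache: serve hot objects locally instead of re-fetching."""
    def __init__(self, fetch, max_items=1024):
        self._fetch = fetch          # the expensive remote lookup
        self._max = max_items
        self._local = {}
        self.misses = 0              # counts actual network fetches

    def get(self, key):
        if key in self._local:
            return self._local[key]  # served locally, no network traffic
        self.misses += 1
        value = self._fetch(key)
        if len(self._local) >= self._max:
            # crude FIFO eviction; a real client would want LRU + a TTL
            self._local.pop(next(iter(self._local)))
        self._local[key] = value
        return value

cache = ReadThroughCache(fetch=lambda k: k.upper())
cache.get("foo"); cache.get("foo"); cache.get("foo")
assert cache.misses == 1   # fetched once, served from the local dict after
```

The catch, of course, is invalidation: a local copy can go stale if another client updates the object, which is why a TTL on the local entries is usually non-negotiable.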
The other thing is that I think we'd have to add some sort of object migration so that when Redis servers come up or go down we could rebalance where things are stored.
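The usual trick for limiting how much has to migrate is consistent hashing: keys and servers hash onto the same ring, and removing a server only remaps the keys that server owned, not everything. A self-contained sketch (server names and replica count are illustrative):

```python
import hashlib
from bisect import bisect

class HashRing:
    """Consistent hashing: removing a server remaps only its own keys."""
    def __init__(self, servers, replicas=100):
        # each server gets `replicas` virtual points on the ring
        self._ring = sorted(
            (self._hash(f"{s}#{i}"), s)
            for s in servers for i in range(replicas)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # first ring point clockwise from the key's hash owns the key
        idx = bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]

before = HashRing(["redis-a", "redis-b", "redis-c"])
after = HashRing(["redis-a", "redis-b"])          # redis-c went down
owners = {k: before.server_for(k) for k in map(str, range(1000))}
moved = sum(1 for k, s in owners.items() if s != after.server_for(k))
# only keys that lived on redis-c need migrating, roughly a third of them
```

With naive `hash(key) % N` placement, dropping one server of three would instead remap about two thirds of all keys.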
I have tried to use Hadoop at lizten.in, but it seems like overkill, especially for small deployments like the basic Slicehost option.
I tried memcached and, sure, it is a good fix for some page-rendering issues, but for elements that you want to cache forever (or for a relatively long time), I prefer to create a blob table in MySQL and store the entries as key/value pairs. I am sure there are more sophisticated persistent datastores, but I guess the idea is the same.
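The blob-table-as-key/value-store idea fits in a few lines. A sketch using sqlite3 to keep the example self-contained (the comment above uses MySQL, but the shape of the table and queries is the same; table and key names are made up):

```python
import sqlite3

# one table: a key column plus a blob column, nothing else
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")

def put(key, value):
    # OR REPLACE so re-caching an entry overwrites the old blob
    db.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))

def get(key):
    row = db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None

put("page:home", b"<html>...</html>")
assert get("page:home") == b"<html>...</html>"
assert get("missing") is None
```

You give up memcached's speed, but entries survive restarts and never get silently evicted, which is the whole point for long-lived elements.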
Maybe with some high-performance storage, such as SSDs, and something like BerkeleyDB, it would be feasible to keep a relatively big cache.
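For a feel of that interface: Python's stdlib `dbm` module exposes the same open-a-file, read/write-bytes-by-key model as BerkeleyDB (the real bindings would be a package like bsddb3; the path and key below are illustrative):

```python
import dbm
import os
import tempfile

# on-disk key/value store; entries survive process restarts
path = os.path.join(tempfile.mkdtemp(), "cache")
with dbm.open(path, "c") as db:        # "c": create the file if missing
    db[b"user:42"] = b"serialized-object"
    assert db[b"user:42"] == b"serialized-object"
```

On an SSD this kind of store can get surprisingly close to in-memory latencies for read-heavy workloads, which is what makes the "relatively big cache" idea plausible.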