# crisp-cache
A crispy-fresh cache that serves updated data where it can, but can fall back to a stale entry if need be. Useful for high-throughput applications that want to avoid cache slams and blocking.
crisp-cache is now v1.x, tested, and stable.
This cache is for high-throughput applications where cache data may become stale before it is invalidated. It adds a state to each cache entry: Valid, Stale, or Expired. This allows the program to ask for a value before the data is evicted from the cache. If the data is stale, the cache will return the stale data, but asynchronously re-fetch it to ensure data stays available. A locking mechanism is also provided so that on a cache miss, data is only retrieved once.
This project is sponsored in part by:
- aerisweather.com - Empowering the next generation
## Example
```javascript
var CrispCache = require('crisp-cache');
var data = {
    hello: "world",
    foo: "bar",
    arr: [1, 2, 3],
    hash: {key: "value", nested: [4, 5, 6]}
};
function fetcher(key, callback) {
    return callback(null, data[key]);
}

var crispCacheBasic = new CrispCache({
    fetcher: fetcher,
    defaultStaleTtl: 300,
    defaultExpiresTtl: 500,
    staleCheckInterval: 100
});

crispCacheBasic.get('hello', function (err, value) {
    console.log(value); // "world"
});

// Wait any amount of time
crispCacheBasic.get('hello', function (err, value) {
    // Still "world": if the entry went stale, it was re-fetched behind the scenes
    console.log(value);
});
```
## Usage
### new CrispCache({options})
Crisp Cache is instantiated because it holds configuration for many of its methods.
Option | Type | Default | Description |
---|---|---|---|
fetcher | (callable)* | null | A method to call when we need to update a cache entry; should have the signature `function(key, callback(err, value, options))` [1] |
defaultStaleTtl | (integer, ms) | 300000 | How long the cache entry is valid before becoming stale. |
staleTtlVariance | (integer, ms) | 0 | How many ms to vary the staleTtl (+/-, to prevent cache slams) |
staleCheckInterval | (integer, ms) | 0 | If > 0, how often to check for stale keys and re-fetch them |
defaultExpiresTtl | (integer, ms) | 0 | If > 0, cache entries that are older than this time will be deleted |
expiresTtlVariance | (integer, ms) | 0 | How many ms to vary the expiresTtl (+/-, to prevent cache slams) |
evictCheckInterval | (integer, ms) | 0 | If > 0, how often to check for expired cache entries and delete them from the cache |
ttlVariance | (integer, ms) | 0 | (Alias for both variance options) How many ms to vary the staleTtl and expiresTtl (+/-, to prevent cache slams) |
maxSize | (integer) | null | Adds a max size for the cache; when entries are added, a size must be provided. When the cache gets too big, LRU purging occurs. [2] |
emitEvents | (boolean) | true | Enable event emission; see the 'Events' section |
events | (Object) | {} | A map of callbacks for events, keyed by event name. Ex. `{ fetch: function(fetchInfo) { console.log(fetchInfo.key); } }` will log each key that is fetched from the original data source. |
Notes:

[1] The fetcher callback's options are the same as for `set()` below. This allows individual keys to have different settings.

[2] maxSize is most effective when combined with the `size` option when individual keys are set. See the methods below for more information.
### get(key, [options], callback)

This will try to get `key` (a string) from the cache. By default, if the key doesn't exist, the cache will call the configured `fetcher` to get the value, setting a lock on the key while the value is retrieved. When the value arrives it is saved in the cache and used to call `callback`; any other pending requests to get this key are resolved at the same time.
Option | Type | Default | Description |
---|---|---|---|
skipFetch | (boolean) | false | If true, will not try to fetch the value if it doesn't exist in the cache. |
forceFetch | (boolean) | false | If true, will always re-fetch from the configured fetcher instead of using the cached value. |
### set(key, value, [options], callback)

Set a value in the cache. Will call `callback` (an error-first callback) with true/false for success when done.
Option | Type | Default | Description |
---|---|---|---|
staleTtl | (integer, ms) | crispCache.defaultStaleTtl | How long the cache entry is valid before becoming stale. |
expiresTtl | (integer, ms) | crispCache.defaultExpiresTtl | If > 0, cache entries that are older than this time will be deleted |
size | (integer) | 1 | Required when maxSize is set on the cache; specifies the size of this cache entry. |
### del(key, [callback])

Removes the provided `key` (a string) from the cache; will call `callback` (an error-first callback) when the delete is done.
### getUsage([options])

Returns some basic usage stats when using the maxSize/LRU capabilities.
Option | Type | Default | Description |
---|---|---|---|
keysLimit | (integer) | 0 | Limit the returned keys array to this many entries. None by default (fastest). |
Returns: an object describing the current cache state, including a sorted `keys` array. The keys are sorted by size when LRU is enabled; otherwise they are in alphabetical order.
```javascript
{
    get: 12,     // Number of get() calls (reset by resetUsage())
    set: 3,      // Number of set() calls (reset by resetUsage())
    size: 16,    // Current size of the cache
    maxSize: 50, // The configured maxSize
    count: 3,    // The total number of keys in the cache (even expired ones)
    keys: [ /* sorted keys, see above */ ]
}
```
### resetUsage()

Reset usage stats back to zero. Subsequent calls to `getUsage()` will only reflect activity since the last time `resetUsage()` was called. Stats like `size`, `maxSize`, `count`, and `keys` aren't reset, since those are derived from options or cached data.
### CrispCache.wrap(originalFn, [options])
Wraps an asynchronous function in a CrispCache cache. This allows you to easily create cached versions of functions, which implement the same interface as the original functions.
For example:
```javascript
var CrispCache = require('crisp-cache');
var fs = require('fs');

var cachedReadFile = CrispCache.wrap(fs.readFile, {
    createKey: function (fileName) {
        return fileName;
    },
    parseKey: function (key) {
        return [key];
    },
    defaultExpiresTtl: 1000 * 60
});

// cachedReadFile has the same signature as `fs.readFile`
cachedReadFile('./README.md', function (err, data) { /* ... */ });
```
Option | Type | Default | Description |
---|---|---|---|
createKey | (Function) | If omitted, a static key will be used for all calls to the cached function | Create a unique cache key from the function arguments. |
parseKey | (Function) | Not required if createKey is omitted (in which case the original function will receive no arguments besides the callback) | Convert a cache key back into an array of function arguments. This should be the inverse of createKey (`parseKey(createKey(key)) === key`). See: Events |
events | (Object) | null | A list of callbacks for events, keyed by the event name. Ex. `{ fetch: function(fetchInfo) { console.log(fetchInfo.key); } }` will log each key that is fetched from the original data source. |
... | | | All options accepted by the CrispCache constructor are also accepted by CrispCache.wrap. See the new CrispCache() documentation above. |
Note: the underlying cache instance is exposed via `CrispCache.wrap()._cache`. Be careful with this, as the keys are computed with the provided `createKey` function.
## Advanced Usage
### Events
Events are emitted by Crisp Cache when the `emitEvents` creation option is enabled (true by default). The following methods emit events:
#### get
Event Name | Fired When | Arguments |
---|---|---|
hit | The cache is hit | `{ key, entry }` - `key` is the requested key; `entry` is the found cache entry (`entry.value` may be helpful) |
miss | There is a cache miss | `{ key }` - `key` is the requested key |
#### fetch

When the fetcher (the function provided at creation to keep the cache up to date) is called internally, Crisp Cache will emit the following:
Event Name | Fired When | Arguments |
---|---|---|
fetch | Right before fetch() is called | `{ key }` - `key` is the requested key |
fetchDone | Once fetch returns with a value | `{ key, value, options }` - `key` is the requested key, `value` the value returned from fetch(), and `options` the caching options returned. |
#### del
Event Name | Fired When | Arguments |
---|---|---|
delete | An entry is deleted from the cache | `{ key, entry }` - `key` is the requested key; `entry` is the found cache entry (`entry.value` may be helpful) |
#### staleCheck

When the stale check runs (on the configured interval), the following events are emitted:
Event Name | Fired When | Arguments |
---|---|---|
staleCheck | Right before the stale check loop runs | none |
staleCheckDone | After the stale check is complete | `[ key0, key1, ... ]` - array of keys that were sent to the fetcher to be re-fetched. |
#### evictCheck

When the evict check runs (on the configured interval), the following events are emitted:
Event Name | Fired When | Arguments |
---|---|---|
evictCheck | Right before the evict check loop runs | none |
evictCheckDone | After the evict check is complete | `{ key: cacheObj, key2: cacheObj, ... }` - a cache-like object of the keys and cache objects that were evicted from the cache. |
### Dynamic TTLs
TTLs can be set on a per-item basis from within the fetcher provided to Crisp Cache.
Let's say we want to create a cache for data we know expires every minute (60,000 ms). Our data source provides how long ago each record was created, so we can dynamically set our TTL and never serve bad data.
```javascript
var CrispCache = require('crisp-cache');
var MAX_AGE = 60000;
var data = {
    a: {name: "Aaron", createdAgo: 12000},
    b: {name: "Betsy", createdAgo: 24000},
    c: {name: "Charlie", createdAgo: 35000}
};

function fetcher(key, callback) {
    var record = data[key];
    if (record) {
        var timeLeft = MAX_AGE - record.createdAgo;
        return callback(null, record, {expiresTtl: timeLeft});
    }
    else {
        return callback(null, null);
    }
}

var crispCacheBasic = new CrispCache({fetcher: fetcher});
crispCacheBasic.get('a', function (err, value) {
    // value.name === "Aaron", cached for the next 48 seconds
});
```
### What about stale times?
The previous example is great, but can we be smarter about how we fetch data? For a high-throughput application, we can ensure users of the cache get fast results by setting a stale TTL alongside the expires TTL.
```javascript
// Same MAX_AGE and data as in the example above
function fetcher(key, callback) {
    var record = data[key];
    if (record) {
        var staleTime = MAX_AGE - record.createdAgo;
        var expiresTime = staleTime + 10000;
        return callback(null, record, {staleTtl: staleTime, expiresTtl: expiresTime});
    }
    else {
        return callback(null, null);
    }
}

var crispCacheBasic = new CrispCache({
    fetcher: fetcher,
    staleCheckInterval: 5000 // Check for stale records every 5 seconds
});
crispCacheBasic.get('a', function (err, value) { /* ... */ });
```
### maxSize and LRU
If a `maxSize` option is provided, a Least Recently Used (LRU) module is loaded to evict cache entries that haven't been touched in a while. This keeps the cache within its `maxSize`.

We can create and use a new cache with `maxSize`:
```javascript
var crispCacheBasic = new CrispCache({
    fetcher: fetcher,
    maxSize: 10
});

// Call the following in series (taking some small liberties with the syntax)
crispCacheBasic.set('testA', 'valueA', {size: 2});
crispCacheBasic.set('testB', 'valueB', {size: 8});
crispCacheBasic.set('testC', 'valueC', {size: 5});
```
This will result in the cache containing just the `testC` entry. The `testA` entry was added first, then `testB`. Both are held in cache because their combined sizes meet the `maxSize` of 10 but don't exceed it. When `testC` is added, however, the cache finds that `testA` is the oldest and removes it. Seeing that the cache is still too large (`testC`'s 5 + `testB`'s 8 > our `maxSize` of 10), it removes `testB` too, leaving us with just `testC` in the cache.
## Error Handling
CrispCache handles errors returned by the fetcher differently depending on the state of your cache. The intent of this behavior is to smooth out hiccups in flaky asynchronous services, using a valid cached value whenever possible.
While your cache is empty, fetcher errors will be propagated:
```javascript
var cache = new CrispCache({
    fetcher: function (key, callback) {
        callback(new Error('Fetch failed!'));
    }
});

cache.get('foo', function (err, value) {
    // err is the fetcher's Error('Fetch failed!')
});
```
While your cache is active or stale, fetcher errors will be ignored, and the last available value will be used:
```javascript
var i = 0;
var cache = new CrispCache({
    fetcher: function (key, callback) {
        // Return a value on the first request
        if (i === 0) {
            i++;
            return callback(null, 'a value');
        }
        // Return errors after the first request
        callback(new Error('Fetch failed!'));
    },
    defaultStaleTtl: 1000 * 60,
    defaultExpiresTtl: 1000 * 60 * 5
});

cache.get('foo', function (err, value) {
    // value === 'a value'
});

// ...anytime within the next 5 minutes...
cache.get('foo', function (err, value) {
    // err is null; value === 'a value' (the last available value is used)
});
```
While your cache is expired, fetcher errors will be propagated:
```javascript
var i = 0;
var cache = new CrispCache({
    fetcher: function (key, callback) {
        // Return a value on the first request
        if (i === 0) {
            i++;
            return callback(null, 'a value');
        }
        // Return errors after the first request
        callback(new Error('Fetch failed!'));
    },
    defaultStaleTtl: 1000 * 60,
    defaultExpiresTtl: 1000 * 60 * 5
});

cache.get('foo', function (err, value) {
    // value === 'a value'
});

// ...5 minutes later...
cache.get('foo', function (err, value) {
    // err is the fetcher's Error('Fetch failed!'): the cache entry has expired
});
```
### Caching errors
If you want all errors to be propagated, you could wrap CrispCache to cache errors like so:
```javascript
function asyncFn(key, cb) {
    // ...
}

var cache = new CrispCache({
    fetcher: function (key, cb) {
        asyncFn(key, function (err, val) {
            // Cache the error, as though it were a value
            if (err) {
                return cb(null, err);
            }
            cb(null, val);
        });
    },
    getOptions: function (val) {
        // Only cache errors for 30 seconds
        if (val instanceof Error) {
            return {
                expiresTtl: 1000 * 30
            };
        }
        // Cache regular values for 5 minutes
        return {
            expiresTtl: 1000 * 60 * 5
        };
    }
});

function cachedAsyncFn(key, cb) {
    cache.get(key, function (err, val) {
        // Return error-type values as errors
        if (val instanceof Error) {
            return cb(val);
        }
        cb(err, val);
    });
}
```
## Roadmap
- Add different caching backends (memory is the only one supported now)