Class: ActiveSupport::Cache::RedisCacheStore
Relationships & Source Files
Super Chains via Extension / Inclusion / Inheritance
Class Chain: self, Store
Instance Chain: self, Strategy::LocalCache, Store
Inherits: ActiveSupport::Cache::Store
Defined in: activesupport/lib/active_support/cache/redis_cache_store.rb
Overview
Redis Cache Store
Deployment note: Take care to use a dedicated Redis cache rather than pointing this at a persistent Redis server (for example, one used as an Active Job queue). Redis won’t cope well with mixed usage patterns and it won’t expire cache entries by default.
Redis cache server setup guide: redis.io/topics/lru-cache
- Supports vanilla Redis, hiredis, and Redis::Distributed.
- Supports Memcached-like sharding across Redises with Redis::Distributed.
- Fault tolerant. If the Redis server is unavailable, no exceptions are raised. Cache fetches are all misses and writes are dropped.
- Local cache. Hot in-memory primary cache within block/middleware scope.
- #read_multi and #write_multi support for Redis mget/mset. Use Redis::Distributed 4.0.1+ for distributed mget support.
- #delete_matched support for Redis KEYS globs.
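As a rough usage sketch (the URL, namespace, and keys below are placeholder values; the redis gem must be available):
require "active_support"
require "active_support/cache"
require "active_support/cache/redis_cache_store"

# Standalone sketch; in a Rails app you would typically set
# config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] } instead.
cache = ActiveSupport::Cache::RedisCacheStore.new(
  url: "redis://localhost:6379/0",  # placeholder URL
  namespace: "myapp-cache"
)

cache.write("greeting", "hello")
cache.read("greeting") # => "hello"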
Constant Summary
-
DEFAULT_ERROR_HANDLER =
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 47
-> (method:, returning:, exception:) do
  if logger
    logger.error { "RedisCacheStore: #{method} failed, returned #{returning.inspect}: #{exception.class}: #{exception.message}" }
  end
  ActiveSupport.error_reporter&.report(
    exception,
    severity: :warning,
    source: "redis_cache_store.active_support",
  )
end
-
DEFAULT_REDIS_OPTIONS =
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 41
{
  connect_timeout: 1,
  read_timeout: 1,
  write_timeout: 1,
}
-
MAX_KEY_BYTESIZE =
Keys are truncated with the Active Support digest if they exceed 1kB
1024
-
SCAN_BATCH_SIZE = (private)
The maximum number of entries to receive per SCAN call.
1000
Store
- Inherited
Class Attribute Summary
-
.supports_cache_versioning? ⇒ Boolean
readonly
Advertise cache versioning support.
Store
- Inherited
Class Method Summary
-
.new(error_handler: DEFAULT_ERROR_HANDLER, **redis_options) ⇒ RedisCacheStore
constructor
Creates a new Redis cache store.
- .build_redis_client(**redis_options) private
- .build_redis_distributed_client(urls:, **redis_options) private
-
.build_redis(redis: nil, url: nil, **redis_options)
Internal use only
Factory method to create a new Redis instance.
Store
- Inherited
.new | Creates a new cache. |
.retrieve_pool_options |
Instance Attribute Summary
- #max_key_bytesize readonly
- #redis readonly
- #supports_expire_nx? ⇒ Boolean readonly private
Store
- Inherited
Instance Method Summary
- #cleanup(options = nil)
  ::ActiveSupport::Cache Store API implementation.
- #clear(options = nil)
  Clear the entire cache on all Redis servers.
- #decrement(name, amount = 1, options = nil)
  Decrement a cached integer value using the Redis decrby atomic operator.
- #delete_matched(matcher, options = nil)
  ::ActiveSupport::Cache Store API implementation.
- #increment(name, amount = 1, options = nil)
  Increment a cached integer value using the Redis incrby atomic operator.
- #inspect
- #read_multi(*names)
  ::ActiveSupport::Cache Store API implementation.
- #stats
  Get info from redis servers.
- #change_counter(key, amount, options) private
-
#delete_entry(key, **options)
private
Delete an entry from the cache.
-
#delete_multi_entries(entries, **_options)
private
Deletes multiple entries in the cache.
- #deserialize_entry(payload, raw: false) private
- #failsafe(method, returning: nil) private
-
#normalize_key(key, options)
private
Truncate keys that exceed 1kB.
- #pipeline_entries(entries, &block) private
-
#read_entry(key, **options)
private
Store provider interface: Read an entry from the cache.
- #read_multi_entries(names, **options) private
- #read_serialized_entry(key, raw: false, **options) private
- #serialize_entries(entries, **options) private
- #serialize_entry(entry, raw: false, **options) private
- #truncate_key(key) private
-
#write_entry(key, entry, raw: false, **options)
private
Write an entry to the cache.
-
#write_multi_entries(entries, **options)
private
Nonstandard store provider API to write multiple values at once.
- #write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, pipeline: nil, **options) private
Strategy::LocalCache
- Included
#middleware | Middleware class can be inserted as a Rack handler to use a local cache for the duration of a request. |
#with_local_cache | Use a local cache for the duration of block. |
#bypass_local_cache, #delete_entry, #local_cache, #local_cache_key, #read_multi_entries, #read_serialized_entry, #use_temporary_local_cache, #write_cache_value, #write_serialized_entry, #cleanup, #clear, #decrement, #delete_matched, #increment |
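As an illustrative sketch, the #with_local_cache block listed above memoizes reads in memory for its duration (the cache instance and key are placeholders):
cache.with_local_cache do
  cache.read("greeting") # first read goes to Redis and is memoized locally
  cache.read("greeting") # served from the in-memory local cache, no second Redis call
end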
Store
- Inherited
#cleanup | Cleans up the cache by removing expired entries. |
#clear | Clears the entire cache. |
#decrement | Decrements an integer value in the cache. |
#delete | Deletes an entry in the cache. |
#delete_matched | Deletes all entries with keys matching the pattern. |
#delete_multi | Deletes multiple entries in the cache. |
#exist? | Returns true if the cache contains an entry for the given key. |
#fetch | Fetches data from the cache, using the given key. |
#fetch_multi | Fetches data from the cache, using the given keys. |
#increment | Increments an integer value in the cache. |
#mute | Silences the logger within a block. |
#read | Reads data from the cache, using the given key. |
#read_multi | Reads multiple values at once from the cache. |
#silence, | |
#silence! | Silences the logger. |
#write | Writes the value to the cache with the key. |
#write_multi |
#_instrument, #default_serializer, | |
#delete_entry | Deletes an entry from the cache implementation. |
#delete_multi_entries | Deletes multiples entries in the cache implementation. |
#deserialize_entry, | |
#expanded_key | Expands key to be a consistent string value. |
#expanded_version, #get_entry_value, #handle_expired_entry, #handle_invalid_expires_in, #instrument, #instrument_multi, | |
#key_matcher | Adds the namespace defined in the options to a pattern designed to match keys. |
#merged_options | Merges the default options with ones specific to a method call. |
#namespace_key | Prefix the key with a namespace string: |
#normalize_key | Expands and namespaces the cache key. |
#normalize_options | Normalize aliased options to their canonical form. |
#normalize_version, | |
#read_entry | Reads an entry from the cache implementation. |
#read_multi_entries | Reads multiple entries from the cache implementation. |
#save_block_result_to_cache, #serialize_entry, #validate_options, | |
#write_entry | Writes an entry to the cache implementation. |
#write_multi_entries | Writes multiple entries to the cache implementation. |
#new_entry |
Constructor Details
.new(error_handler: DEFAULT_ERROR_HANDLER, **redis_options) ⇒ RedisCacheStore
Creates a new Redis cache store.
There are four ways to provide the Redis client used by the cache: the :redis param can be a Redis instance or a block that returns a Redis instance, or the :url param can be a string or an array of strings which will be used to create a Redis instance or a Redis::Distributed instance.
Option Class Result
:redis Proc -> options[:redis].call
:redis Object -> options[:redis]
:url String -> Redis.new(url: …)
:url Array -> Redis::Distributed.new([{ url: … }, { url: … }, …])
No namespace is set by default. Provide one if the Redis cache server is shared with other apps: namespace: 'myapp-cache'.
Compression is enabled by default with a 1kB threshold, so cached values larger than 1kB are automatically compressed. Disable by passing compress: false, or change the threshold by passing compress_threshold: 4.kilobytes.
No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See redis.io/topics/lru-cache for cache server setup.
Race condition TTL is not set by default. This can be used to avoid “thundering herd” cache writes when hot cache entries are expired. See Store#fetch for more.
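As a rough sketch combining the options above (values are illustrative, and the duration helpers assume Active Support's numeric core extensions are loaded):
cache = ActiveSupport::Cache::RedisCacheStore.new(
  url: "redis://localhost:6379/0",  # placeholder URL
  namespace: "myapp-cache",         # avoid key collisions on a shared server
  compress_threshold: 4.kilobytes,  # raise the default 1kB compression threshold
  expires_in: 1.hour,               # default TTL applied to writes
  race_condition_ttl: 10.seconds    # dampen thundering-herd expiry on hot keys
)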
Setting skip_nil: true will not cache nil results:
cache.fetch('foo') { nil }
cache.fetch('bar', skip_nil: true) { nil }
cache.exist?('foo') # => true
cache.exist?('bar') # => false
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 149
def initialize(error_handler: DEFAULT_ERROR_HANDLER, **redis_options)
  universal_options = redis_options.extract!(*UNIVERSAL_OPTIONS)

  if pool_options = self.class.send(:retrieve_pool_options, redis_options)
    @redis = ::ConnectionPool.new(pool_options) { self.class.build_redis(**redis_options) }
  else
    @redis = self.class.build_redis(**redis_options)
  end

  @max_key_bytesize = MAX_KEY_BYTESIZE
  @error_handler = error_handler

  super(universal_options)
end
Class Attribute Details
.supports_cache_versioning? ⇒ Boolean
(readonly)
Advertise cache versioning support.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 63
def self.supports_cache_versioning?
  true
end
Class Method Details
.build_redis(redis: nil, url: nil, **redis_options)
Factory method to create a new Redis instance.
Handles four options: a :redis block, a :redis instance, a single :url string, and multiple :url strings.
Option Class Result
:redis Proc -> options[:redis].call
:redis Object -> options[:redis]
:url String -> Redis.new(url: …)
:url Array -> Redis::Distributed.new([{ url: … }, { url: … }, …])
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 81
def build_redis(redis: nil, url: nil, **redis_options) # :nodoc:
  urls = Array(url)

  if redis.is_a?(Proc)
    redis.call
  elsif redis
    redis
  elsif urls.size > 1
    build_redis_distributed_client(urls: urls, **redis_options)
  elsif urls.empty?
    build_redis_client(**redis_options)
  else
    build_redis_client(url: urls.first, **redis_options)
  end
end
.build_redis_client(**redis_options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 104
def build_redis_client(**redis_options)
  ::Redis.new(DEFAULT_REDIS_OPTIONS.merge(redis_options))
end
.build_redis_distributed_client(urls:, **redis_options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 98
def build_redis_distributed_client(urls:, **redis_options)
  ::Redis::Distributed.new([], DEFAULT_REDIS_OPTIONS.merge(redis_options)).tap do |dist|
    urls.each { |u| dist.add_node url: u }
  end
end
Instance Attribute Details
#max_key_bytesize (readonly)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 109
attr_reader :max_key_bytesize
#redis (readonly)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 110
attr_reader :redis
#supports_expire_nx? ⇒ Boolean
(readonly, private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 477
def supports_expire_nx?
  return @supports_expire_nx if defined?(@supports_expire_nx)

  redis_versions = redis.then { |c| Array.wrap(c.info("server")).pluck("redis_version") }
  @supports_expire_nx = redis_versions.all? { |v| Gem::Version.new(v) >= Gem::Version.new("7.0.0") }
end
Instance Method Details
#change_counter(key, amount, options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 450
def change_counter(key, amount, options)
  redis.then do |c|
    c = c.node_for(key) if c.is_a?(Redis::Distributed)

    expires_in = options[:expires_in]

    if expires_in
      if supports_expire_nx?
        count, _ = c.pipelined do |pipeline|
          pipeline.incrby(key, amount)
          pipeline.call(:expire, key, expires_in.to_i, "NX")
        end
      else
        count, ttl = c.pipelined do |pipeline|
          pipeline.incrby(key, amount)
          pipeline.ttl(key)
        end
        c.expire(key, expires_in.to_i) if ttl < 0
      end
    else
      count = c.incrby(key, amount)
    end

    count
  end
end
#cleanup(options = nil)
::ActiveSupport::Cache
Store API implementation.
Removes expired entries. Handled natively by Redis least-recently-/ least-frequently-used expiry, so manual cleanup is not supported.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 282
def cleanup(options = nil)
  super
end
#clear(options = nil)
Clear the entire cache on all Redis servers. Safe to use on shared servers if the cache is namespaced.
Failsafe: Raises errors.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 290
def clear(options = nil)
  failsafe :clear do
    if namespace = merged_options(options)[:namespace]
      delete_matched "*", namespace: namespace
    else
      redis.then { |c| c.flushdb }
    end
  end
end
#decrement(name, amount = 1, options = nil)
Decrement a cached integer value using the Redis decrby atomic operator. Returns the updated value.
If the key is unset or has expired, it will be set to -amount:
cache.decrement("foo") # => -1
To set a specific value, call #write passing raw: true:
cache.write("baz", 5, raw: true)
cache.decrement("baz") # => 4
Decrementing a non-numeric value, or a value written without raw: true, will fail and return nil.
Failsafe: Raises errors.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 267
def decrement(name, amount = 1, options = nil)
  options = merged_options(options)
  key = normalize_key(name, options)

  instrument :decrement, key, amount: amount do
    failsafe :decrement do
      change_counter(key, -amount, options)
    end
  end
end
#delete_entry(key, **options) (private)
Delete an entry from the cache.
#delete_matched(matcher, options = nil)
::ActiveSupport::Cache
Store API implementation.
Supports Redis KEYS glob patterns:
h?llo matches hello, hallo and hxllo
h*llo matches hllo and heeeello
h[ae]llo matches hello and hallo, but not hillo
h[^e]llo matches hallo, hbllo, ... but not hello
h[a-b]llo matches hallo and hbllo
Use \ to escape special characters if you want to match them verbatim.
See redis.io/commands/KEYS for more.
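For example (keys are hypothetical):
cache.write("user:1:profile", "...")
cache.write("user:2:profile", "...")

cache.delete_matched("user:*:profile") # deletes both entries
cache.delete_matched("user:?:profile") # ? matches exactly one character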
Failsafe: Raises errors.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 201
def delete_matched(matcher, options = nil)
  unless String === matcher
    raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
  end
  pattern = namespace_key(matcher, options)

  instrument :delete_matched, pattern do
    redis.then do |c|
      cursor = "0"
      # Fetch keys in batches using SCAN to avoid blocking the Redis server.
      nodes = c.respond_to?(:nodes) ? c.nodes : [c]

      nodes.each do |node|
        begin
          cursor, keys = node.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE)
          node.del(*keys) unless keys.empty?
        end until cursor == "0"
      end
    end
  end
end
#delete_multi_entries(entries, **_options) (private)
Deletes multiple entries in the cache. Returns the number of entries deleted.
#deserialize_entry(payload, raw: false) (private)
#failsafe(method, returning: nil) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 484
def failsafe(method, returning: nil)
  yield
rescue ::Redis::BaseError => error
  @error_handler&.call(method: method, exception: error, returning: returning)
  returning
end
#increment(name, amount = 1, options = nil)
Increment a cached integer value using the Redis incrby atomic operator. Returns the updated value.
If the key is unset or has expired, it will be set to amount:
cache.increment("foo") # => 1
cache.increment("bar", 100) # => 100
To set a specific value, call #write passing raw: true:
cache.write("baz", 5, raw: true)
cache.increment("baz") # => 6
Incrementing a non-numeric value, or a value written without raw: true, will fail and return nil.
Failsafe: Raises errors.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 240
def increment(name, amount = 1, options = nil)
  options = merged_options(options)
  key = normalize_key(name, options)

  instrument :increment, key, amount: amount do
    failsafe :increment do
      change_counter(key, amount, options)
    end
  end
end
#inspect
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 164
def inspect
  "#<#{self.class} options=#{options.inspect} redis=#{redis.inspect}>"
end
#normalize_key(key, options) (private)
Truncate keys that exceed 1kB.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 414
def normalize_key(key, options)
  truncate_key super&.b
end
#pipeline_entries(entries, &block) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 306
def pipeline_entries(entries, &block)
  redis.then { |c|
    if c.is_a?(Redis::Distributed)
      entries.group_by { |k, _v| c.node_for(k) }.each do |node, sub_entries|
        node.pipelined { |pipe| yield(pipe, sub_entries) }
      end
    else
      c.pipelined { |pipe| yield(pipe, entries) }
    end
  }
end
#read_entry(key, **options) (private)
Store
provider interface: Read an entry from the cache.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 320
def read_entry(key, **options)
  deserialize_entry(read_serialized_entry(key, **options), **options)
end
#read_multi(*names)
::ActiveSupport::Cache
Store API implementation.
Read multiple values at once. Returns a hash of requested keys -> fetched values.
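For example (a sketch with illustrative keys; keys without entries are omitted from the result):
cache.write("city", "Duckburgh")
cache.read_multi("city", "missing") # => { "city" => "Duckburgh" }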
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 172
def read_multi(*names)
  return {} if names.empty?

  options = names.extract_options!
  options = merged_options(options)
  keys    = names.map { |name| normalize_key(name, options) }

  instrument_multi(:read_multi, keys, options) do |payload|
    read_multi_entries(names, **options).tap do |results|
      payload[:hits] = results.keys.map { |name| normalize_key(name, options) }
    end
  end
end
#read_multi_entries(names, **options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 330
def read_multi_entries(names, **options)
  options = merged_options(options)
  return {} if names == []
  raw = options&.fetch(:raw, false)

  keys = names.map { |name| normalize_key(name, options) }

  values = failsafe(:read_multi_entries, returning: {}) do
    redis.then { |c| c.mget(*keys) }
  end

  names.zip(values).each_with_object({}) do |(name, value), results|
    if value
      entry = deserialize_entry(value, raw: raw)
      unless entry.nil? || entry.expired? || entry.mismatched?(normalize_version(name, options))
        begin
          results[name] = entry.value
        rescue DeserializationError
        end
      end
    end
  end
end
#read_serialized_entry(key, raw: false, **options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 324
def read_serialized_entry(key, raw: false, **options)
  failsafe :read_entry do
    redis.then { |c| c.get(key) }
  end
end
#serialize_entries(entries, **options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 444
def serialize_entries(entries, **options)
  entries.transform_values do |entry|
    serialize_entry(entry, **options)
  end
end
#serialize_entry(entry, raw: false, **options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 436
def serialize_entry(entry, raw: false, **options)
  if raw
    entry.value.to_s
  else
    super(entry, raw: raw, **options)
  end
end
#stats
Get info from redis servers.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 301
def stats
  redis.then { |c| c.info }
end
#truncate_key(key) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 418
def truncate_key(key)
  if key && key.bytesize > max_key_bytesize
    suffix = ":hash:#{ActiveSupport::Digest.hexdigest(key)}"
    truncate_at = max_key_bytesize - suffix.bytesize
    "#{key.byteslice(0, truncate_at)}#{suffix}"
  else
    key
  end
end
#write_entry(key, entry, raw: false, **options) (private)
Write an entry to the cache.
Requires Redis 2.6.12+ for extended SET options.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 357
def write_entry(key, entry, raw: false, **options)
  write_serialized_entry(key, serialize_entry(entry, raw: raw, **options), raw: raw, **options)
end
#write_multi_entries(entries, **options) (private)
Nonstandard store provider API to write multiple values at once.
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 399
def write_multi_entries(entries, **options)
  return if entries.empty?

  failsafe :write_multi_entries do
    pipeline_entries(entries) do |pipeline, sharded_entries|
      options = options.dup
      options[:pipeline] = pipeline
      sharded_entries.each do |key, entry|
        write_entry key, entry, **options
      end
    end
  end
end
#write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, pipeline: nil, **options) (private)
# File 'activesupport/lib/active_support/cache/redis_cache_store.rb', line 361
def write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, pipeline: nil, **options)
  # If race condition TTL is in use, ensure that cache entries
  # stick around a bit longer after they would have expired
  # so we can purposefully serve stale entries.
  if race_condition_ttl && expires_in && expires_in > 0 && !raw
    expires_in += 5.minutes
  end

  modifiers = {}
  if unless_exist || expires_in
    modifiers[:nx] = unless_exist
    modifiers[:px] = (1000 * expires_in.to_f).ceil if expires_in
  end

  if pipeline
    pipeline.set(key, payload, **modifiers)
  else
    failsafe :write_entry, returning: nil do
      redis.then { |c| !!c.set(key, payload, **modifiers) }
    end
  end
end