fetch(name, options = nil, &block) public

Fetches data from the cache, using the given key. If there is data in the cache with the given key, then that data is returned.

If there is no such data in the cache (a cache miss), then nil will be returned. However, if a block has been passed, that block will be passed the key and executed in the event of a cache miss. The return value of the block will be written to the cache under the given cache key, and that return value will be returned.

cache.write('today', 'Monday')
cache.fetch('today')  # => "Monday"

cache.fetch('city')   # => nil
cache.fetch('city') do
  'Duckburgh'
end
cache.fetch('city')   # => "Duckburgh"

Options

Internally, fetch calls read_entry, and calls write_entry on a cache miss. Thus, fetch supports the same options as #read and #write. Additionally, fetch supports the following options:
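For example, a write option such as :expires_in can be passed straight to fetch and will apply to the value written on a cache miss. A minimal sketch (compute_report is a hypothetical helper, not part of this API):

cache.fetch('report', expires_in: 10.minutes) do
  compute_report
end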

  • force: true - Forces a cache “miss,” meaning we treat the cache value as missing even if it’s present. Passing a block is required when force is true so this always results in a cache write.

    cache.write('today', 'Monday')
    cache.fetch('today', force: true) { 'Tuesday' } # => 'Tuesday'
    cache.fetch('today', force: true) # => ArgumentError
    

    The :force option is useful when you’re calling some other method to ask whether you should force a cache write (see the sketch after this options list). Otherwise, it’s clearer to just call write.

  • skip_nil: true - Prevents caching a nil result:

    cache.fetch('foo') { nil }
    cache.fetch('bar', skip_nil: true) { nil }
    cache.exist?('foo') # => true
    cache.exist?('bar') # => false
    
  • :race_condition_ttl - Specifies the number of seconds during which an expired value can be reused while a new value is being generated. This can be used to prevent race conditions when cache entries expire, by preventing multiple processes from simultaneously regenerating the same entry (also known as the dog pile effect).

    When a process encounters a cache entry that has expired less than :race_condition_ttl seconds ago, it will bump the expiration time by :race_condition_ttl seconds before generating a new value. During this extended time window, while the process generates a new value, other processes will continue to use the old value. After the first process writes the new value, other processes will then use it.

    If the first process errors out while generating a new value, another process can try to generate a new value after the extended time window has elapsed.

    # Set all values to expire after one minute.
    cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute)
    
    cache.write('foo', 'original value')
    val_1 = nil
    val_2 = nil
    sleep 60
    
    Thread.new do
      val_1 = cache.fetch('foo', race_condition_ttl: 10.seconds) do
        sleep 1
        'new value 1'
      end
    end
    
    Thread.new do
      val_2 = cache.fetch('foo', race_condition_ttl: 10.seconds) do
        'new value 2'
      end
    end
    
    cache.fetch('foo') # => "original value"
    sleep 10 # First thread extended the life of the cache entry by another 10 seconds
    cache.fetch('foo') # => "new value 1"
    val_1 # => "new value 1"
    val_2 # => "original value"
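
As noted under :force above, the option is most useful when the decision to bypass the cache comes from elsewhere. A minimal sketch, assuming a hypothetical report_stale? predicate and compute_report helper:

cache.fetch('report', force: report_stale?) do
  compute_report
end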
    

Dynamic Options

In some cases it may be necessary to dynamically compute options based on the cached value. To support this, an ActiveSupport::Cache::WriteOptions instance is passed as the second argument to the block. For example:

cache.fetch("authentication-token:#{user.id}") do |key, options|
  token = authenticate_to_service
  options.expires_at = token.expires_at
  token
end