Many end users are unaware that caching is not only making their experience better, it's making the experience possible. Users happen across the joys of caching when visiting favorite websites and seeing daily content rendered immediately. It's only when they have to wait one to two minutes for content updates from a source system that they realize the initial blazing speed was made possible by caching.

Problem Definition

For this example, assume that daily content is a mashup of several different third-party systems that integrate with a web site. Also assume that the third-party systems are unreliable and do not scale, or at least are not provisioned to scale for the endpoint accessing them. This is the perfect problem scenario for caching. Here are some of the problem attributes:

  • Data is expensive to fetch
  • Data is not transactional (latency in updates is acceptable)
  • Data has a high request load but an extremely consistent result
  • Data fetched from its source can fail frequently
  • Data whose source requires traffic to be throttled over time
  • Data that isn't critical if lost

The uses for caching certainly aren't limited to these attributes, but they do provide a good foundation for assessing whether a given context matches the problem definition.

Solution

So, what is caching? In a nutshell, caching makes things faster. It helps not only with latency, but also with high-bandwidth scenarios.

From Wikipedia:

In computing, a cache (/ˈkæʃ/ kash)[1] is a component that transparently stores data so that future requests for that data can be served faster.

There are a number of different types of caches, including but not limited to:

  • CPU cache - The processor has caches (L1, L2, etc.) built into it that are much faster to access than main memory
  • Browser cache - A web browser makes use of the HTTP protocol's caching semantics to cache content (images, HTML, CSS, JavaScript, etc.)
  • DNS cache - The operating system saves DNS entries to avoid repeated DNS lookups

The following will cover server-side caching as it applies to building web applications. To help explain the solution, there are some key terms that need to be defined (a short sketch after the list puts them in context):

  • Entry - The value being cached. It can be a complex value in any format
  • Key - A string-valued term used to uniquely identify a cache entry
  • TTL - Time To Live. This defines the duration until cache expiration. Sometimes referred to as max age
  • Eviction - The action of removing an entry from a cache
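
To put these terms in context, here is a minimal cache-aside sketch using the open source Jedis client; the class, the key scheme, and the loadDrawFromSource call are illustrative assumptions, not part of any framework:

import redis.clients.jedis.Jedis;

public class CacheAsideExample {

    private static final int TTL_SECONDS = 180; // TTL: the entry expires 3 minutes after being stored

    public static String getDraw(Jedis jedis, String categoryId) {
        String key = "TournamentDraw:" + categoryId; // Key: uniquely identifies the entry
        String entry = jedis.get(key);               // Entry: the cached value
        if (entry == null) {
            entry = loadDrawFromSource(categoryId);  // cache miss: fall back to the slow source
            jedis.setex(key, TTL_SECONDS, entry);    // store with a TTL so Redis expires it automatically
        }
        return entry;
    }

    public static void evict(Jedis jedis, String categoryId) {
        jedis.del("TournamentDraw:" + categoryId);   // Eviction: explicit removal before the TTL fires
    }

    // Stand-in for the expensive third-party fetch described earlier
    private static String loadDrawFromSource(String categoryId) {
        return "{\"category\":\"" + categoryId + "\"}";
    }
}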

For a long time Memcached was considered the de facto caching service in a solution. Amazon ElastiCache arguably has better support for Memcached than for Redis, likely because it embraced Memcached first, and this shows in the implementation. Redis, however, has come on strong in recent years and is praised for its performance and for offering a more robust set of services than Memcached. Both are out-of-process caches, which has significant advantages over in-process caching:

  • In-process caching does not scale horizontally (e.g., you can't implement a session store and scale beyond one node; see the sketch after this list)
  • In-process caching has shadowing impacts that bloat the memory used by a process, especially when running several container processes on one machine (e.g., RoR)
  • In-process caching doesn't support high availability (HA) through replication and failover
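
As a rough illustration of the first point, an in-process cache is just process-local state. The hypothetical session store below works on a single node but breaks the moment a second node is added, because each JVM holds its own private copy:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InProcessSessionStore {
    // Lives inside this one JVM: a request routed to another node
    // sees an empty map, so this cannot scale horizontally.
    private static final Map<String, String> SESSIONS = new ConcurrentHashMap<>();

    public static void put(String sessionId, String data) {
        SESSIONS.put(sessionId, data);
    }

    public static String get(String sessionId) {
        return SESSIONS.get(sessionId); // null on any node that didn't serve the login
    }
}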

Implementing a cache provider on top of a NoSQL database can also work, but this option has some downsides:

  • Eviction is not automatic. Values need to be explicitly evicted when TTL is not part of the database service (see the sketch after this list)
  • Data compaction becomes an issue due to the short-lived nature of the entries
  • Performance can suffer if the NoSQL database is disk-based
  • On the upside, a NoSQL data service can provide much larger capacity out of the box than Redis or Memcached
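
To illustrate the first downside, when the datastore offers no TTL the application must track expiry itself. A sketch of that bookkeeping, with a plain in-memory map standing in for the NoSQL table:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ManualTtlCache {

    // Stand-in for a NoSQL table; a real store would persist these two fields.
    private static class Row {
        final String value;
        final long expiresAtMillis;
        Row(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Row> table = new ConcurrentHashMap<>();

    public void put(String key, String value, long ttlMillis) {
        table.put(key, new Row(value, System.currentTimeMillis() + ttlMillis));
    }

    public String get(String key) {
        Row row = table.get(key);
        if (row == null) {
            return null;
        }
        if (row.expiresAtMillis < System.currentTimeMillis()) {
            table.remove(key); // nothing evicts for us: the read path has to do it
            return null;
        }
        return row.value;
    }
}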

The lesson learned? Use the appropriate tool for the problem at hand. Here are some examples of using caching in code.

RoR example

Here is a snippet of some old (as in 4+ years old) Rails code written to take advantage of Dalli, a Memcached client gem. The code uses a simple closure to refresh a cache entry with a TTL/max age of 3 minutes.

class TournamentController < ApplicationController
  def draw
    # Refresh the entry via the block on a cache miss; expires after 3 minutes
    result = cache(['TournamentDraw', params[:id]], :expires_in => 3.minutes) do
      DrawHandler.new.get_draw_for_category(params[:id])
    end
    render_results(result)
  end
end

Java Example

Below is a snippet that could be used with the Spring Cache abstraction to annotate your services and interweave caching.

@Cacheable(value = "tourney", key = "'TournamentDraw:' + #category")
public JsonNode getDrawForCategory(String category) throws Exception {
    // On a cache miss, Spring runs the body and stores the result in the
    // "tourney" cache under the computed key; on a hit, the body is skipped.
    DrawHandler drawHandler = new DrawHandler();
    return drawHandler.getDrawForCategory(category);
}

The controller, REST service, or whatever the outer container implementation is, would simply call getDrawForCategory on the bean with the method defined above; Spring takes care of injecting the caching calls (a sketch of such a caller appears after the serializer code below). It is also necessary to include a definition in the Spring context, for example:

<cache:annotation-driven />
<bean id="jedisFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" 
    p:host-name="localhost" p:port="6379"/>
<bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate">
    <property name="connectionFactory" ref="jedisFactory"/> 
    <property name="valueSerializer"> 
         <bean class="JsonNodeSerializer"/>
    </property> 
</bean>
<bean id="cacheManager" class="org.springframework.data.redis.cache.RedisCacheManager"> 
    <constructor-arg ref="redisTemplate"/> 
    <constructor-arg value="60"/> 
</bean>

Here is the code for the JsonNodeSerializer:

import org.codehaus.jackson.JsonNode; // Jackson 1.x, which JacksonJsonRedisSerializer is built on
import org.springframework.data.redis.serializer.JacksonJsonRedisSerializer;

public class JsonNodeSerializer extends JacksonJsonRedisSerializer<JsonNode> {
    public JsonNodeSerializer() {
        super(JsonNode.class);
    }
}
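
With the configuration and serializer in place, the calling side can stay completely unaware of the cache. Here is a sketch of such a controller; DrawService and the request mapping are assumptions for illustration, since Spring proxies the @Cacheable bean and the controller just calls it:

import org.codehaus.jackson.JsonNode; // Jackson 1.x, matching the serializer above
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class TournamentDrawController {

    @Autowired
    private DrawService drawService; // hypothetical bean carrying the @Cacheable method above

    @RequestMapping("/tournament/{category}/draw")
    @ResponseBody
    public JsonNode draw(@PathVariable String category) throws Exception {
        // Spring's caching proxy intercepts this call: a hit returns the
        // Redis entry directly, a miss runs the method body and caches it.
        return drawService.getDrawForCategory(category);
    }
}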

ElastiCache

With the ways to implement caching in code covered, it's time to talk about the dev/ops side of provisioning the needed services. ElastiCache is an Amazon service for managing caching services; it doesn't provide programmatic access to the caches themselves in the form of a Redis client, so the developer will have to rely on open source tools for that. ElastiCache does, however, provide a way to configure replica groups, and a DNS abstraction for accessing the write endpoint. Some more terms:

  • Cluster - Users can only define one node per cluster, which is fine since Redis relies on replication groups anyway
  • Replication Group - A group that maps one-to-one to a cluster and includes replicas and a primary endpoint
  • Primary Endpoint - A node with a DNS record pointing to it. This is the master write node for a replication group
  • Replica Node - One of a set of nodes that operate in read-only mode and replicate from the Primary Endpoint
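
On the client side, the important detail is to connect through the Replication Group's Primary Endpoint DNS name rather than to an individual node, so the client follows a failover automatically. A minimal Jedis sketch (the hostname below is a made-up placeholder, not a real ElastiCache address):

import redis.clients.jedis.Jedis;

public class ElastiCacheClient {
    public static void main(String[] args) {
        // Always use the replication group's primary endpoint DNS name.
        // AWS repoints this record during failover, so no client change is needed.
        String primaryEndpoint = "my-repl-group.example.cache.amazonaws.com"; // placeholder
        try (Jedis jedis = new Jedis(primaryEndpoint, 6379)) {
            jedis.setex("TournamentDraw:42", 180, "{\"draw\":[]}");
            System.out.println(jedis.get("TournamentDraw:42"));
        }
    }
}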

The process ElastiCache performs during failover is best illustrated as a sequence of steps.

Initially, the Replication Group is set up with one Primary Endpoint (e1) and two Replica Nodes (r1 & r2). If for some reason e1 goes down, AWS will create a new Primary Endpoint node (e2), cloned from one of the replicas (r1). Once the new Primary Endpoint is up, AWS will alter the DNS for the Replication Group to point to the new Primary Endpoint (e2).

Redis Caching Considerations

The size of the Redis dataset is constrained by the size of available memory. *Gulp* That can be catastrophic if you simply attempt to cache everything possible. There are some things that help with this. Reducing the TTL ensures the cache cleans itself up sooner; a simple exercise that can be very useful. Another item to keep in mind: if you are aggressively shadowing results to store individual items as well as aggregates, but only the aggregates are included in responses, consider dropping the non-aggregate entries.

Taking this a step further, only the keys are truly required to exist in memory. If the system has cache volume restrictions, it is best to keep key terms short; don't use intelligent keys or store values in the keys. With that in hand, it's possible to use Redis Virtual Memory to expand the value set onto disk storage. Performance is, of course, the trade-off in this scenario.
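
Since every key must stay in memory, a key-builder with short fixed prefixes instead of descriptive names can meaningfully cut per-key overhead at high volume; the prefix scheme below is an arbitrary example:

public class CacheKeys {
    // "t:d:" instead of "TournamentDraw:category:" saves roughly 20 bytes
    // per key, which adds up across millions of keys held in memory.
    public static String tournamentDraw(String categoryId) {
        return "t:d:" + categoryId;
    }
}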

Summary

Overall, Redis provides developers with a robust, performant caching tool, while AWS ElastiCache provides the infrastructure services that help with availability and management. Used together, they make it possible to build quality caching functionality that can take your solution to the next level.