How-to Guide: Caching with Redis-Namespace in Vets API
Caching is an essential component of Vets API: it improves performance, reduces latency, and minimizes load on backend resources and upstream service partners. When we send millions of requests to our service partners, it can strain their systems. To alleviate that strain, reduce latency, and enhance performance, we use caching as a strategic solution.
The redis-namespace gem allows us to integrate caching mechanisms into Vets API seamlessly, ensuring efficient data retrieval, reduced latency, and minimized overload risk for our upstream partners.
Is caching right for you?
While caching can greatly improve application performance and reduce costs, it's not a one-size-fits-all solution. Take the time to analyze the specific needs and nuances of your product, and consult with your development team to determine whether caching aligns with your goals. If done right, caching can be a powerful tool in your toolkit.
In Vets API, we have Coverband available to us. Coverband tracks which parts of the codebase are being executed and which parts are not, giving you an idea of how your code is being used in a live environment. Visit /coverband in each live API environment; Coverband is behind GitHub Teams-based OAuth. This data may help you decide whether or not to integrate caching into your solution.
Before diving into caching, here are some key considerations:
Data Retrieval Times: Does retrieving data from a distinct third-party call take a significant amount of time, leading to a noticeable lag for the user? If a given third-party service (e.g. something on the VA network) is slow, then caching can be beneficial.
Data Freshness: How often does your data change? If it's infrequent, caching can be a great asset. On the other hand, if the data is highly dynamic and changes every few seconds, caching might introduce complexities without significant benefits.
Consistency vs. Speed: Is it okay if users see slightly outdated data for a short period, or is real-time consistency crucial for your product? Caching often involves a trade-off between speed and data accuracy.
Cache types and storage
Key-Value Caching: Ideal for storing individual data entities with a unique key.
Elasticache with Redis: Using AWS's managed Redis service to handle cache storage in live environments.
Cache integration
Configuration: Leveraging existing configurations for the redis-namespace gem.
redis.yml: Defines keys, namespaces, and TTL (Time To Live).
AWS Elasticache Setup: Using Elasticache as the caching layer for the API, with namespaces ensuring distinct key separation.
Redis-namespace integration: how to
Caching key-value pairs using the Redis store pattern and the redis-namespace gem requires manual configuration for each specific use case; it is not set up automatically.
The redis-namespace gem is an invaluable tool when working with Redis in shared environments. By creating a namespace for Redis keys, it ensures distinct key separation, using key prefixing to organize data and reduce the risk of key collisions. Each live environment has a dedicated AWS Elasticache instance (running Redis). See the redis-namespace documentation. Additionally, Sidekiq and Rails.cache each have their own dedicated Redis instances.
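To illustrate how the gem prefixes keys, here is a minimal sketch using the gem in isolation. The namespace name, keys, and values are hypothetical and only meant to show the prefixing behavior.

require 'redis'
require 'redis-namespace'

# Wrap a plain Redis connection in a hypothetical namespace.
redis = Redis.new
ns = Redis::Namespace.new('example-namespace', redis: redis)

# The key is stored in Redis as "example-namespace:user:123",
# but callers only ever see "user:123".
ns.set('user:123', 'cached-value')
ns.get('user:123')                        # => "cached-value"
redis.get('example-namespace:user:123')   # => "cached-value" (raw, prefixed key)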
Why use redis-namespace?
Key Separation: Especially in environments where multiple applications or services use the same Redis instance, namespacing can prevent data conflicts. In Vets API, we use a single Redis instance shared across the application for caching.
Organized Cache Management: Grouping related keys under a common namespace makes cache management tasks, like bulk deletion of keys or cache analysis, much more straightforward.
Flexibility: While organizing keys, the gem doesn't restrict any Redis functionality. It acts as a thin wrapper and allows full use of Redis features, as the sketch after this list shows.
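The following hedged sketch illustrates the organized-management and thin-wrapper points above: standard Redis commands issued through the namespaced client only see and affect keys in that namespace. The namespace and keys are again hypothetical.

ns = Redis::Namespace.new('example-namespace', redis: Redis.new)

ns.set('profile:1', 'a')
ns.set('profile:2', 'b')

# Standard commands pass straight through the wrapper.
ns.expire('profile:1', 3600)   # set a one-hour TTL
ns.ttl('profile:1')            # => remaining seconds

# keys/del operate only within the namespace, which makes bulk cleanup of
# related keys straightforward. (KEYS is fine for illustration; prefer SCAN
# against a production instance.)
ns.keys('profile:*')           # => ["profile:1", "profile:2"]
ns.del(*ns.keys('profile:*'))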
Creating a new redis-namespace key
Redis-namespace is integrated into the Vets API Gemfile. After running bundle install locally, redis-namespace is ready for use. See the redis.yml file in Vets API for examples. In live environments, there is a dedicated Elasticache instance for caching purposes using the redis-namespace gem. The each_ttl key sets the expiration time (in seconds) for each key; this value will vary depending on your use case.
my_attributes:
  namespace: some-attributes-namespace-here
  each_ttl: 86400 # expiration in seconds; make this unique to your use case
Using the gem in vets-api
Duplicate namespaces can lead to data collisions, where multiple components inadvertently overwrite or interfere with each other's data, resulting in unpredictable behavior or bugs. In other words, don't call Redis::Namespace.new('my-namespace'...) more than once.
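One hedged way to follow this advice is to construct the namespaced client a single time, for example in a module constant or initializer, and reuse it everywhere. The module and namespace names below are hypothetical.

# Hypothetical module that builds the namespaced client exactly once.
# In practice you would reuse the application's existing Redis connection
# rather than opening a new one here.
module MyFeature
  REDIS = Redis::Namespace.new('my-feature-namespace', redis: Redis.new)
end

# Callers reuse the single client instead of constructing their own namespace.
MyFeature::REDIS.set('some-key', 'some-value')
MyFeature::REDIS.get('some-key')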
To use the namespaced client, we have a base class model called redis_store.rb. Across the application, the pattern is to inherit from redis_store.rb as a base class. See any example that inherits from Common::RedisStore, namely this example for VaProfileRedis. All standard Redis operations can be executed using the namespaced client and base class, and the gem ensures that keys are automatically prefixed with the specified namespace.
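As a hedged sketch of that pattern, the model below wires the redis.yml entry from the previous section into a Common::RedisStore subclass. It assumes the redis_store, redis_ttl, redis_key, and attribute class macros and the REDIS_CONFIG constant that the base class and vets-api configuration conventionally provide; check redis_store.rb for the current interface.

# Hypothetical model caching attributes under the my_attributes namespace.
class MyAttributes < Common::RedisStore
  redis_store REDIS_CONFIG[:my_attributes][:namespace]
  redis_ttl   REDIS_CONFIG[:my_attributes][:each_ttl]
  redis_key   :user_uuid

  attribute :user_uuid
  attribute :payload
end

# Writes and reads go through the namespaced Elasticache client:
MyAttributes.new(user_uuid: '123', payload: { name: 'example' }).save
MyAttributes.find('123') # => the cached record, or nil once the TTL expires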
Best practices
Consistent Naming: Ensure that the namespace is meaningful and consistent across related keys to easily identify and manage them.
Namespace Length: While defining namespaces, be conscious of the added length to every Redis key. Overly long namespaces could contribute to increased memory usage.
Avoid duplicates: Only call Redis::Namespace.new once for your namespace.
Monitoring and alerts
We have an existing monitor for Redis/Elasticache in both sandbox and production environments. This Datadog monitor will send alerts if the cache hit rate falls below 15%.
Help and feedback
Get help from the Platform Support Team in Slack.
Submit a feature idea to the Platform.