Afaan Ashiq

Software Engineering

Migration to AWS Serverless Elasticache

February 21, 2024

In this post, we will talk through an interesting problem I came across recently when migrating from provisioned AWS Elasticache to the serverless Elasticache offering.


Switching to serverless Elasticache

I recently had a backend workload which was placed behind a Redis cache. This Redis cache was deployed in the system as a provisioned Elasticache service.

We would soon be deploying a feature which meant the volume of data that needed to be cached would be highly variable, especially when comparing the same deployment across different environments.

On 27th November 2023, AWS announced a new serverless Elasticache offering. This solved quite a big problem for us, in that we would no longer have to worry about right-sizing and provisioning the Elasticache deployment ourselves.

So it made sense to migrate from the provisioned offering to the serverless alternative.


The original configuration

The Redis cache endpoint was provided to the relevant backend workload as something like the following:

"redis://<address>:<port>"

When the backend workload was hooked up to the provisioned Elasticache service this worked a charm. Another thing to note is that this workload was a Django application.

And one last thing to make note of: the "redis://" prefix creates a plain TCP socket connection to the provided Redis endpoint.
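Since the workload was a Django application, the endpoint would typically be wired in via the CACHES setting. Here's a minimal sketch; the endpoint below is a placeholder rather than the real one, and it assumes Django's built-in Redis cache backend (available from Django 4.0):

```python
# settings.py -- minimal sketch; the endpoint is a placeholder.
CACHES = {
    "default": {
        # Django's built-in Redis cache backend (Django 4.0+).
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        # "redis://" opens a plain (unencrypted) TCP socket to the endpoint.
        "LOCATION": "redis://my-cache.example.cache.amazonaws.com:6379",
    }
}
```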


The problem

So I redeployed the Redis cache as the serverless Elasticache offering and simulated an action which would have hit the Redis cache.

And the request just hung. No response from the Redis cache. Yikes!


The solution

After banging my head against the wall, I read through the AWS blog post for their serverless Elasticache offering. One thing popped out.

Unlike the provisioned offering, the serverless Elasticache service enforces SSL connections by default.

This means that switching from the provisioned Elasticache service to the serverless offering is not an entirely backwards-compatible change.

The Redis cache endpoint which was provided to the Django backend workload was switched to something that resembles the following:

"rediss://<address>:<port>"

And no, the extra s in the "rediss://" prefix is not a typo. This is the redis-py URL notation for signifying an SSL-wrapped TCP connection in the connection string.
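The scheme change itself is a one-character fix. As an illustration, here is a small stdlib-only helper that rewrites a redis:// URL to its SSL-wrapped rediss:// equivalent; the function name and endpoint are hypothetical, not from the original deployment:

```python
from urllib.parse import urlsplit, urlunsplit


def to_ssl_redis_url(url: str) -> str:
    """Rewrite a redis:// URL to its SSL-wrapped rediss:// equivalent."""
    parts = urlsplit(url)
    if parts.scheme == "redis":
        # SplitResult is a namedtuple, so _replace swaps just the scheme.
        parts = parts._replace(scheme="rediss")
    return urlunsplit(parts)


print(to_ssl_redis_url("redis://my-cache.example.com:6379"))
# rediss://my-cache.example.com:6379
```

A URL already using rediss:// passes through unchanged, so the helper is safe to apply unconditionally.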


Summary

When orchestrating what you might think should be a simple and straightforward change, it’s important not to gloss over the details of documentation and release notes!

Glossing over them might cost you additional time, as it did me.

