Introduction to Redis common patterns

Syafdia Okta · Published in Geek Culture · 6 min read · Jul 26, 2021


Redis (Remote Dictionary Server) is an in-memory data store which can be used as a database, cache, and message broker. The most apparent use case for Redis is as a cache store on the server side. Aside from caching, there are some patterns that can help us solve our common problems.

Rate Limiting

You have a public end point, and you want to limit access to the end point based on client IP address. Let’s say we want to limit the access to 100 requests per minute. By using the built-in Redis GET and INCR commands, we can build a rate limiter for our end point. The flow is quite simple:

  • Create a key based on the client IP and the current minute, i.e., 10.1.89.100:15.
  • For each request to our end point, fetch the value from Redis using that key.
  • If the value is greater than 100, break the process.
  • If the value is less than or equal to 100, continue the process.
  • Increase the value using INCR on the key.
  • To clear our Redis storage from unused data, set the key expiration to 1 minute using EXPIRE.

Ok, let’s implement the rate limiter on our end point:

We create a “GET /pokemons” end point to return an array of Pokemon data, and we limit end point usage to 5 requests per minute. For each request, we use the client IP as the key and increase the counter. When the counter is greater than 5, we immediately respond with a 429 HTTP status, so the clients should wait for one more minute to perform their next request.
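The flow above can be sketched in Python. `FakeRedis` below is a hypothetical in-memory stand-in so the sketch runs without a server; with a real client (e.g. redis-py) the `get`/`incr`/`expire` calls look the same, and `allow_request` is just an illustrative name for the check:

```python
import time

class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (GET/INCR/EXPIRE only)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def incr(self, key):
        self.store[key] = int(self.store.get(key, 0)) + 1
        return self.store[key]
    def expire(self, key, seconds):
        pass  # key expiry is elided in this sketch

LIMIT = 5  # requests per minute

def allow_request(client, ip):
    # Key combines the client IP and the current minute, e.g. "10.1.89.100:15"
    key = f"{ip}:{time.localtime().tm_min}"
    count = int(client.get(key) or 0)
    if count >= LIMIT:
        return False  # the handler would respond with HTTP 429 here
    client.incr(key)
    client.expire(key, 60)  # clear unused counters after one minute
    return True
```

Note the counter is incremented only for allowed requests, so exactly 5 requests per minute pass and the 6th is rejected.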

Locking

Sometimes we want our application to do exactly one piece of work at a time, and we need to block the other requests until the previous one has finished.

Suppose we have a “POST /generate_reports” end point; the end point is responsible for normalizing data in the database and creating a report based on the normalized data. We want one user to only be able to generate one report at a time. By using the Redis SET command with the NX and EX options, we can create a lock to protect our resource from multiple access. Here are the steps to create our locking mechanism:

  • Create a key based on the user ID or some identifier.
  • Generate a random string as the value.
  • Define a time to live (TTL) to prevent a deadlock on our lock, so when the client fails to release the lock, the key will be cleared based on our TTL.
  • Using the previous key, value, and TTL, put it into Redis with the “SET {KEY} {VALUE} NX EX {TTL}” command. The NX option only sets the key if it does not already exist, and the command returns nil when the key already exists.
  • Run the report generation process if the previous command doesn’t return nil, otherwise break immediately.
  • When the report has been generated, we need to release the lock. To avoid key deletion by another client, in the previous step we created a non-guessable random string as the value. So we will create a script that only removes the key if the value matches:
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
else
    return 0
end
  • After the lock has been released, we can accept another request.

Here is an example of our lock implementation:
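A minimal Python sketch of the locking steps, assuming a hypothetical `FakeRedis` stand-in (a real client’s `SET key value NX EX ttl` behaves the same way); `acquire_lock` and `release_lock` are illustrative names, and the release check mirrors the Lua script in plain Python:

```python
import uuid

class FakeRedis:
    """Minimal in-memory stand-in (SET with NX, GET, DEL; TTL expiry elided)."""
    def __init__(self):
        self.store = {}
    def set(self, key, value, nx=False, ex=None):
        if nx and key in self.store:
            return None  # NX: key already exists, do nothing
        self.store[key] = value
        return True
    def get(self, key):
        return self.store.get(key)
    def delete(self, key):
        return 1 if self.store.pop(key, None) is not None else 0

def acquire_lock(client, key, ttl=30):
    # Non-guessable random value identifies this lock holder
    token = uuid.uuid4().hex
    if client.set(key, token, nx=True, ex=ttl):
        return token
    return None  # someone else holds the lock

def release_lock(client, key, token):
    # Delete only if the value still matches, like the Lua script above
    # (in real Redis this check-and-delete must run atomically via EVAL)
    if client.get(key) == token:
        return client.delete(key)
    return 0
```

A second `acquire_lock` call on the same key returns None until the first holder releases with its own token.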

The author of Redis proposed a more robust algorithm for locking called Redlock. I’ve written an article about the Redlock implementation in this post.

Job Queue

Here is the job queue definition based on Wikipedia:

In system software job queue (sometimes batch queue), is a data structure maintained by job scheduler software containing jobs to run. Users submit their programs that they want executed, “jobs”, to the queue for batch processing.

In a nutshell, by using a job queue we can have asynchronous communication between multiple processes by putting information about the task to be performed inside a queue.

The operation on the queue is First In First Out (FIFO). Redis has a built-in Lists data structure. By using the BRPOP and LPUSH commands on Lists, we can implement our own job queue.

First, we need to create a worker to pull jobs from the queue:

  • We use the “BRPOP {QUEUE_NAME} 0” command to pull a job from the queue; the command will block our program until it receives a job.
  • Since the previous command terminates after receiving a job, we should put it inside an infinite loop.
  • When the job has been received, process the job.

That’s it. Next, we will create a handler for pushing the job into the queue:

  • We simply create an HTTP end point, “POST /send_welcome_email”; it is responsible for getting the email and putting it into the queue, so the worker will send the email asynchronously.
  • Push the email to the queue using “LPUSH {QUEUE_NAME} {EMAIL}”.
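Both sides can be sketched in Python. `FakeRedis` here is an in-memory stand-in whose `brpop` returns None on an empty list instead of blocking (the real command with timeout 0 blocks forever); the queue name and function names are illustrative:

```python
import collections

class FakeRedis:
    """In-memory stand-in exposing LPUSH/BRPOP on named lists."""
    def __init__(self):
        self.lists = collections.defaultdict(collections.deque)
    def lpush(self, name, value):
        self.lists[name].appendleft(value)
        return len(self.lists[name])
    def brpop(self, name, timeout=0):
        # Real BRPOP blocks until a job arrives; this stand-in just
        # returns None when the list is empty.
        if self.lists[name]:
            return (name, self.lists[name].pop())
        return None

QUEUE = "welcome_email_queue"  # hypothetical queue name

def enqueue_email(client, email):
    # Handler side: "POST /send_welcome_email" would call this
    client.lpush(QUEUE, email)

def run_worker(client):
    # Worker side: a real worker loops forever on a blocking BRPOP;
    # here we stop when the stand-in reports an empty queue.
    processed = []
    while True:
        job = client.brpop(QUEUE, timeout=1)
        if job is None:
            break
        processed.append(job[1])  # "send" the welcome email here
    return processed
```

LPUSH adds to the head and BRPOP takes from the tail, so jobs come out in FIFO order.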

Try to run the worker and the HTTP handler in separate tabs of your terminal. When you send a POST request with email data to the /send_welcome_email end point, our worker will immediately pull the job and process the email address.

Cool, we’ve just implemented our own job queue system, but this is not production ready. If you want to use a queue in production with Redis, you can use Sidekiq; this article explains how Sidekiq works on the inside.

Pub Sub

I’m just quoting Wikipedia again, XD

In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.

So basically, pub sub is a pattern where the publisher sends a message to a channel, and the subscribers receive the message from that channel. The publisher doesn’t care who the subscribers are or what their purpose is.

The Redis commands for pub sub are quite straightforward: we call SUBSCRIBE if we want to subscribe to a channel, and call PUBLISH if we want to post a message to the given channel.

Since Redis provides us a built-in pub sub implementation, let’s focus on how we use it. Suppose we have a Cinema App with a feature to book a seat at the cinema and process the payment. For this feature, we develop 3 services; let’s call them:

  • HTTP Seat Manager App
  • Pub Sub Payment App
  • Pub Sub Seat Manager App

When users want to watch a movie at the cinema, they will visit our web page and choose a seat via the HTTP Seat Manager App. When the seat has been blocked for the user, it will publish a message to the BLOCK_SEAT_SUCCESS channel.

Our Pub Sub Payment App will listen to the BLOCK_SEAT_SUCCESS channel, and when it receives a message through that channel, it will make a payment via the “make_payment” function. If the payment succeeds, a message will be published to the MAKE_PAYMENT_SUCCESS channel; when the payment fails, it will publish another message via the MAKE_PAYMENT_FAILED channel.

The Pub Sub Payment App also has a “refund” function in case there is a message from the ALLOCATE_SEAT_FAILED channel, since our Pub Sub Seat Manager App will publish a message via this channel when an error occurs in the “allocate_seat” function.

The Pub Sub Seat Manager App will listen to the MAKE_PAYMENT_SUCCESS channel; if it receives data through that channel, it will call the “allocate_seat” function to allocate the seat for the user. It also has an “unblock_seat” function in case it receives a message from the MAKE_PAYMENT_FAILED or ALLOCATE_SEAT_FAILED channels, so the seat can be released for another process.
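The happy-path flow above can be sketched with a tiny in-process stand-in for Redis pub sub. `FakePubSub` delivers messages synchronously (unlike the real client, where subscribers run in separate processes); the channel names come from the description above, and the handlers only record which step ran:

```python
import collections

class FakePubSub:
    """In-process stand-in for Redis SUBSCRIBE/PUBLISH with synchronous delivery."""
    def __init__(self):
        self.handlers = collections.defaultdict(list)
    def subscribe(self, channel, handler):
        self.handlers[channel].append(handler)
    def publish(self, channel, message):
        for handler in self.handlers[channel]:
            handler(message)

broker = FakePubSub()
events = []  # records which step each service performed

# Pub Sub Payment App: reacts to a blocked seat by taking payment
def on_block_seat_success(msg):
    events.append("make_payment")
    broker.publish("MAKE_PAYMENT_SUCCESS", msg)  # assume the payment succeeds

# Pub Sub Seat Manager App: reacts to a successful payment by allocating the seat
def on_make_payment_success(msg):
    events.append("allocate_seat")

broker.subscribe("BLOCK_SEAT_SUCCESS", on_block_seat_success)
broker.subscribe("MAKE_PAYMENT_SUCCESS", on_make_payment_success)

# HTTP Seat Manager App: the seat was blocked, so publish the event
broker.publish("BLOCK_SEAT_SUCCESS", {"seat": "A1", "user": 42})
```

The failure branches (MAKE_PAYMENT_FAILED, ALLOCATE_SEAT_FAILED triggering “unblock_seat” and “refund”) would be wired up the same way, one subscription per channel.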

Yep, that’s our example of using the pub sub feature of Redis. Run each app in a separate terminal and you can see the output. Try adjusting the parameters to make it fail or succeed, and see how the other processes run based on the subscribed channels.

We’ve just implemented some common patterns using Redis, but the patterns are not limited to this article only; we can also implement Geospatial indexing, IP Range Indexing, Full Text Search, a Partitioned Index, a Bloom Filter, etc. You can see the full source code for the article in this repository.

Thanks
