In a multithreaded programming environment, we use locks to limit access to shared resources. An example of a lock is a mutex, whose name stands for mutual exclusion. A mutex guards shared data (an array, a linked list, a hash map, or any primitive type) by allowing only a single thread to access the data at a time.
When it comes to distributed systems, it is hard to ensure that multiple instances of our application do exactly one piece of work at a time, for example calling an external API, writing to external storage, or performing some heavy computation. Martin Kleppmann, the author of the book Designing Data-Intensive Applications, mentioned on his blog that there are two reasons why we need locks in a distributed application:
- Efficiency: Saves us from unnecessarily doing the same work twice.
- Correctness: Prevents concurrent processes from stepping on each others’ toes and messing up the state of our system.
Salvatore Sanfilippo, the author of Redis, proposed an algorithm called Redlock that uses Redis as a distributed lock manager (DLM). You can see the details of the Redlock algorithm on this page. The way we use Redis to implement distributed locks is by setting a unique key with a time to live (TTL); when the client is done using the resource, it deletes the key on Redis. If something goes wrong on the client side, Redis automatically releases the lock once the TTL expires. Redlock is designed to operate over a minimum of 3 independent Redis instances to avoid any single point of failure.
Let’s create our own implementation of distributed locks using Redlock; in this article, we will implement it in Go. First, we create a Locker interface to perform the Lock and Unlock operations.
Then we create a struct to hold the configuration for our Locker implementation.
- redisClients: Since we use multiple Redis instances, we store the clients in a slice. We use go-redis as the client for this tutorial.
- expiration: The lock will be released automatically once the given TTL passes.
- drift: A small allowance to account for clock drift and the precision of Redis key expiry.
- quorum: The minimum number of instances that must grant the lock, N/2+1, where N is the number of Redis instances. If the client acquires the lock on fewer than N/2+1 instances, we try to unlock the resource on all instances.
- name: The name is used as the Redis key, so we should use a unique name for each lock.
- value: The value is a random string, so the lock will be removed only if it is still the one that was set by the client trying to remove it.
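Assembling the fields above, the struct could look like the sketch below. The field names follow the list, but the redisClient interface is my dependency-free stand-in: in the real code the slice would hold *redis.Client values from github.com/redis/go-redis/v9, whose SetNX and Eval return command objects rather than plain values.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// redisClient is the narrow slice of client behaviour the lock needs. In the
// real code this is backed by *redis.Client from github.com/redis/go-redis/v9.
type redisClient interface {
	SetNX(ctx context.Context, key, value string, ttl time.Duration) (bool, error)
	Eval(ctx context.Context, script string, keys []string, args ...string) (int64, error)
}

// dlmLock carries the configuration described above.
type dlmLock struct {
	redisClients []redisClient // one client per independent Redis instance
	expiration   time.Duration // TTL after which Redis releases the lock on its own
	drift        time.Duration // allowance for clock drift and Redis expiry precision
	quorum       int           // minimum number of instances that must grant the lock
	name         string        // Redis key; unique per guarded resource
	value        string        // random token identifying this client's lock
}

// quorumFor returns N/2+1, the majority of n instances.
func quorumFor(n int) int { return n/2 + 1 }

func main() {
	fmt.Println(quorumFor(3), quorumFor(5)) // prints "2 3"
}
```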
To acquire the lock, we set the key only if it does not already exist, using the NX option.
If the key is set successfully and we have not reached the expiration time while setting the data, we increase the success counter. If our total success count is less than the defined quorum, we call the Unlock method to release all the locks. For the Unlock implementation, we simply loop over all the clients and unlock each one.
To finalize the implementation, let's create a struct to hold the connections and a constructor for our Locker implementation.
The DLM struct holds the Redis connections, and we use it as a singleton instance. NewLocker creates a new instance of the Locker implementation. We create generateRandomString to generate a simple random value, which is used as the lock value. Feel free to adjust the random value generator to your needs.
Now let's try our implementation. We will use 10 seconds as the expiration, and we will call Unlock immediately after executing some process (we put a 1-second sleep inside the process).
And here is the result: we are able to execute the process successfully.
Now, let's remove the Unlock call from our demo code.
The first attempt succeeds, but on the second attempt we get an error, since the lock was not released by the previous execution.
Now let's wait 15 seconds and run the demo again. The code executes successfully, since the lock is released automatically after 10 seconds.
We've just implemented the Redlock algorithm using Redis for distributed locks. You can find the full source code for this article in this repository.