Did you know you can share UniData data files between servers using Python & Redis?

Today I want to talk about a little-known new feature of UniData 8.2.1 that is not yet documented.

UniData 8.2.1 (on Linux only) offers a new way to share hashed files between multiple database instances by hosting those files in Redis. The big advantage of this approach over NFA and EDA is that it provides limited distributed locking.

I’ll demonstrate the new functionality with an example.

Suppose I need to share the PRODUCTS file located in the XDEMO account between two instances of UniData.

First, I create two instances of UniData – udt1 and udt2 – on the Linux boxes den-vm-udt1.rocketsoftware.com and den-vm-udt2.rocketsoftware.com, respectively. Then I enable Python on both servers.

I also need Redis, so I install it on the den-vm-redis.rocketsoftware.com machine.

Next, I configure my UniData servers to work with Redis. To do that, on each server I create a file called .u2clusterconfig in its $UDTHOME directory and set properties that specify the Redis host, port, and the lock release timeout for the files it hosts.

The content of my .u2clusterconfig is listed below:

redis_host=den-vm-redis.rocketsoftware.com
redis_port=6379
redis_lock_expiretime=600000

Bear in mind that:

  • The default value for redis_lock_expiretime is 300000 milliseconds (5 min)
  • No whitespace characters are allowed inside a property definition
  • You must restart UniData for .u2clusterconfig changes to take effect
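To make the property rules above concrete, here is a small Python sketch of how such a file could be parsed and validated. The key=value format and the 300000 ms default come from the article; the parser itself is illustrative and is not part of UniData:

```python
# Illustrative parser for the .u2clusterconfig key=value format.
# The format and the redis_lock_expiretime default are from the article;
# this helper is hypothetical, not shipped with UniData.

DEFAULTS = {"redis_lock_expiretime": "300000"}  # 5 minutes, in milliseconds

def parse_u2clusterconfig(text):
    """Parse key=value lines; reject whitespace inside a property definition."""
    props = dict(DEFAULTS)
    for lineno, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if not stripped:
            continue  # blank lines are harmless
        if any(ch.isspace() for ch in stripped):
            raise ValueError(f"whitespace inside property on line {lineno}: {line!r}")
        key, sep, value = stripped.partition("=")
        if not sep or not key or not value:
            raise ValueError(f"malformed property on line {lineno}: {line!r}")
        props[key] = value
    return props

config = parse_u2clusterconfig(
    "redis_host=den-vm-redis.rocketsoftware.com\n"
    "redis_port=6379\n"
    "redis_lock_expiretime=600000\n"
)
print(config["redis_host"])                            # den-vm-redis.rocketsoftware.com
print(int(config["redis_lock_expiretime"]) // 60000)   # 10 (minutes)
```

Note that an omitted redis_lock_expiretime falls back to the 300000 ms default, matching the behavior described above.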

Once I establish the link between UniData instances and Redis through the .u2clusterconfig file, I am ready to move the PRODUCTS file to Redis and make both servers point to it.

There are four Python utilities that help with this kind of task. They live in the $UDTBIN folder, and their names, along with a brief description of each, are listed below:

  1. redis2ud.py – converts a hashed file from Redis to UniData
  2. ud2redis.py – converts a hashed file from UniData to Redis
  3. mark2redis.py – marks a file as a Redis file
  4. mark2ud.py – marks a file as a UniData file

For my purpose, I will use two of these four utilities, ud2redis.py and mark2redis.py, to convert the PRODUCTS file to Redis and mark it as a Redis file on both servers.

Here are the steps I perform:

  1. Connect to the udt1 box through SSH
  2. cd /path/to/XDEMO
  3. Move the PRODUCTS file to Redis and mark it as a Redis file by executing the following command:
    python ../bin/ud2redis.py PRODUCTS
  4. Connect to the udt2 box through SSH
  5. cd /path/to/XDEMO
  6. Mark the PRODUCTS file as a Redis file by executing the following command:
    python ../bin/mark2redis.py PRODUCTS
    (we only need to mark the file as a Redis file here because its content was already moved to Redis in step 3)

As a result of my actions, the key u2file:PRODUCTS is created in Redis, and you can see it by executing the KEYS * command in a Redis client.

Finally, I achieved my goal and the PRODUCTS file is now hosted in Redis and is shared between my two UniData instances – udt1 and udt2.

Now if I lock a record (for example, 9736154947 "Never say never again") in the PRODUCTS file with a BASIC READU command running on udt1, it will also be locked on udt2, and the LIST.READU command on udt2 will return information about the lock. I can release the lock explicitly, for example by executing a WRITE command on udt1, or it will expire automatically after the time defined in the .u2clusterconfig file by the redis_lock_expiretime property. In my case, that is 10 min (600000 milliseconds).
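The cross-instance behavior can be pictured with a small in-memory simulation. This is not the UniData implementation – just a pure-Python sketch of a shared lock table with millisecond expiry, using the 600000 ms timeout and a hypothetical lock-key shape for illustration:

```python
# Pure-Python sketch of a shared lock table with Redis-style expiry.
# The timeout value comes from the article; the LockTable class and
# the exact lock-key shape are illustrative assumptions only.

class LockTable:
    def __init__(self, expire_ms):
        self.expire_ms = expire_ms
        self.locks = {}  # key -> (owner, acquired_at_seconds)

    def _expired(self, acquired_at, now):
        return (now - acquired_at) * 1000 >= self.expire_ms

    def acquire(self, key, owner, now):
        """Exclusive acquire: succeeds if the key is free or its lock expired."""
        held = self.locks.get(key)
        if held and not self._expired(held[1], now):
            return held[0] == owner  # refused unless we already hold it
        self.locks[key] = (owner, now)
        return True

    def holder(self, key, now):
        """Who currently holds the lock, or None if free/expired."""
        held = self.locks.get(key)
        if held and not self._expired(held[1], now):
            return held[0]
        return None

table = LockTable(expire_ms=600000)                    # 10 minutes
table.acquire("u2lock:PRODUCTS:9736154947", "udt1", now=0.0)
print(table.holder("u2lock:PRODUCTS:9736154947", now=1.0))           # udt1
print(table.acquire("u2lock:PRODUCTS:9736154947", "udt2", now=1.0))  # False: visible from udt2 too
print(table.holder("u2lock:PRODUCTS:9736154947", now=700.0))         # None: expired after 600 s
```

The key point the sketch captures is that both instances consult the same lock state, so a lock taken on udt1 is immediately visible to udt2, and cleanup happens purely by timeout.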

Below are some important details to keep in mind about lock implementation for Redis hosted files:

  • This is not full-featured distributed lock management
  • Locks are managed in Redis
  • The UniData Lock Manager is bypassed for Redis files
  • There is no queueing support; the Wait and Retry technique is required. That means UniData will immediately return an error to an application that requests an exclusive lock on a record that is not available. The application must then try to obtain the lock again and not assume that UniData will queue the request.
  • Lock cleanup is timeout-based
  • Data integrity cannot be guaranteed for operations that last longer than the timeout
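Because there is no queueing, application code has to poll. A hedged sketch of the Wait and Retry pattern follows; the try_lock callable stands in for whatever lock-acquisition call your application makes (a BASIC READU with a LOCKED clause, for instance), and the helper itself is illustrative:

```python
import time

def acquire_with_retry(try_lock, attempts=5, delay_s=0.01):
    """Wait and Retry: call try_lock() until it succeeds or attempts run out.

    try_lock is any callable returning True when the exclusive lock is
    obtained and False when it is currently held elsewhere. There is no
    server-side queue, so the client must back off and poll again.
    """
    for _ in range(attempts):
        if try_lock():
            return True
        time.sleep(delay_s)  # back off before retrying
    return False

# Demo with a fake lock that frees up on the third attempt.
state = {"calls": 0}
def fake_try_lock():
    state["calls"] += 1
    return state["calls"] >= 3

print(acquire_with_retry(fake_try_lock))  # True
print(state["calls"])                     # 3
```

In a real application you would also cap the total wait time and surface a clear error to the user when the lock cannot be obtained, since the record may legitimately be held on the other instance.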

Other useful facts to know about this topic are:

  • The default namespace for lock keys is u2lock.
  • Only exclusive locks are managed in Redis; shared locks are managed in UniData.
  • Locks left in Redis by dead UniData sessions are cleaned up through the predefined lock expiration time set in .u2clusterconfig by the redis_lock_expiretime property.
  • You can use the TTL command in a Redis client to track the remaining time on a lock.

As you can see, this feature is straightforward to use. If you find it interesting, please try it, and if you have any comments or questions, feel free to comment on this post or email me at psmelianski@rocketsoftware.com.
