How To Migrate Redis Data to a DigitalOcean Managed Database

Prerequisites

To complete this tutorial, you will need:

  • Redis installed on your server. To set this up, follow Step 1 of our guide on How To Install and Secure Redis on Ubuntu 18.04.

  • A Redis instance managed by DigitalOcean. To provision one, see our Managed Redis Product Documentation.

  • Stunnel, an open-source proxy used to create TLS tunnels between machines, installed on your server and configured to maintain a secure connection with your Managed Redis Database. This is necessary because DigitalOcean Managed Databases require connections to be made securely over TLS. Complete our tutorial on How To Connect to a Managed Redis Instance over TLS with Stunnel and redis-cli to set this up. Please note, however, that you do not need to install the redis-tools package in Step 1, since you will have already installed redis-cli when you installed Redis in the previous prerequisite tutorial.
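
Once the tunnel is in place, you can quickly verify that it's working by sending a ping through it from your server. This check assumes the tunnel listens on local port 8000, as configured in the stunnel tutorial, and that managed_redis_password stands in for your Managed Database's actual password:

redis-cli -p 8000 -a managed_redis_password ping

A healthy connection will return PONG.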

Note: To help keep things clear, this guide will refer to the Redis instance hosted on your Ubuntu server as the “source.” Likewise, it will refer to the instance managed by DigitalOcean as either the “target” or the “Managed Database.”

Things To Consider When Migrating Redis Data to a Managed Database

There are several methods you can employ to migrate data from one Redis instance to another. However, some of these approaches present problems when you’re migrating data to a Redis instance managed by DigitalOcean.

For example, you can use replication to turn your target Redis instance into an exact copy of the source. To do this, you would connect to the target Redis server and run the replicaof command with the following syntax:

replicaof source_hostname_or_ip source_port

This will cause the target instance to replicate all the data held on the source without destroying any data that was previously stored on it. Following this, you would promote the replica back to being a primary instance with the following command:

replicaof no one

However, Redis instances managed by DigitalOcean can only be configured as read-only replicas. If you have clients writing data to the source database, you won't be able to configure them to write to the managed instance while it's replicating data. This means you would lose any data sent by the clients after you promote the managed instance from being a replica and before you configure the clients to begin writing data to it, making replication a suboptimal migration solution.

Another method for migrating Redis data is to take a snapshot of the data held on your source instance with either Redis’s save or bgsave commands. Both of these commands export the snapshot to a file ending in .rdb, which you would then transfer to the target server. Following that, you’d restart the Redis service so it can load the data.

However, many managed database providers — including DigitalOcean — don't allow you to access the managed database server's underlying file system. This means there's no way to upload the snapshot file or make the necessary changes to the target database's configuration file to allow Redis to import the data.

Because the configuration of DigitalOcean's Managed Databases limits the efficacy of both replication and snapshotting as means of migrating data, this tutorial will instead use Redis's migrate command to move data from the source to the target. The migrate command is designed to move only one key at a time, but we will use some handy command line tricks to move an entire Redis database with a single command.

Step 1 — (Optional) Loading Your Source Redis Instance with Sample Data

This optional step involves loading your source Redis instance with some sample data so you can experiment with migrating data to your Managed Redis Database. If you already have data that you want to migrate over to your target instance, you can move ahead to Step 2.

To begin, run the following command to access your Redis server:

redis-cli

If you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:

auth password

Then run the following commands. These will create a number of keys holding a few strings, a hash, a list, and a set:

mset string1 "Redis" string2 "is" string3 "fun!"
hmset hash1 field1 "Redis" field2 "is" field3 "fast!"
rpush list1 "Redis" "is" "feature-rich!"
sadd set1 "Redis" "is" "free!"
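
If you'd like to confirm that these keys were created, you can list them from the same prompt with the scan command (the keys may appear in a different order on your instance):

scan 0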

Additionally, run the following expire commands to provide a few of these keys with a timeout. This will make them volatile, meaning that Redis will delete them after the specified amount of time, 7500 seconds:

expire string2 7500
expire hash1 7500
expire set1 7500
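
If you want to check that a timeout was applied, you can inspect a key's remaining time to live with the ttl command. Immediately after running the expire commands, it will return a value at or just below 7500:

ttl string2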

With that, you have some example data you can export to your target Redis instance. You can keep the redis-cli prompt open for now, since we will run a few more commands from it in the next step in order to back up this data.

Step 2 — Backing Up Your Data

Previously, we discussed using Redis’s bgsave command to take a snapshot of a Redis database and migrate it to another instance. While we won’t use bgsave as a means of migrating Redis data, we will use it here to back up the data in case we encounter an error during the migration process.

If you don’t already have it open, start by opening up the Redis command line interface:

redis-cli

Also, if you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:

auth password

Next, run the bgsave command. This will create a snapshot of your current data set and export it to a dump file whose name ends in .rdb:

bgsave

Note: As mentioned in the previous Things To Consider section, you can take a snapshot of your Redis database with either the save or bgsave commands. The reason we use the bgsave command here is that the save command runs synchronously, meaning it will block any other clients connected to the database. Because of this, the save command documentation recommends that this command should almost never be run in a production environment.

Instead, it suggests using the bgsave command, which runs asynchronously. This causes Redis to fork its process: the parent process continues to serve clients while the child process saves the database to disk before exiting.

Note that if clients add or modify data while the bgsave operation is running or after it finishes, these changes won’t be captured in the snapshot.
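
If you'd like to confirm that the background save has completed, you can run the lastsave command from the same redis-cli prompt. It returns the Unix timestamp of the last successful snapshot, so the value will update once bgsave finishes:

lastsave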

Following that, you can close the connection to your Redis instance by running the exit command:

exit

If you need it in the future, you can find this dump file in your Redis installation’s working directory. If you’re not sure which directory this is, you can check by opening up your Redis configuration file with your preferred text editor. Here, we’ll use nano:

sudo nano /etc/redis/redis.conf

Navigate to the line that begins with dbfilename. It will look like this by default:

/etc/redis/redis.conf

. . .
# The filename where to dump the DB
dbfilename dump.rdb
. . .

This directive defines the file to which Redis will export snapshots. The next line (after any comments) will look like this:

/etc/redis/redis.conf

. . .
dir /var/lib/redis
. . .

The dir directive defines Redis’s working directory where any Redis snapshots are stored. By default, this is set to /var/lib/redis, as shown in this example.

Close the redis.conf file. Assuming you didn’t make any changes to the file, you can do so by pressing CTRL+X.
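
Alternatively, you can read both of these values without opening the configuration file by querying the running server with the config get command. A quick check from your server's prompt might look like this (add the -a flag and your password if your instance requires authentication):

redis-cli config get dir
redis-cli config get dbfilename

Each command returns the directive's name followed by its current value.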

Then, list the contents of your Redis working directory to confirm that it’s holding the exported data dump file:

sudo ls /var/lib/redis

If the dump file was exported correctly, you will see it in this command’s output:

Output
dump.rdb

Once you’ve confirmed that you successfully backed up your data, you can begin the process of migrating it to your Managed Database.

Step 3 — Migrating the Data

Recall that this guide uses Redis’s internal migrate command to move keys one by one from the source database to the target. However, unlike the previous steps in this tutorial, we won’t run this command from the redis-cli prompt. Instead, we’ll run it directly from the server’s bash prompt. Doing so will allow us to use a few bash tricks to migrate all the keys on the source database with one command.

Note: If you have clients writing data to your source Redis instance, now would be a good time to configure them to also write data to your Managed Database. This way, you can migrate the existing data from the source to your target without losing any writes that occur after the migration.

Also, be aware that this migration command will not overwrite any existing data on the target database. However, if a key on the target shares a name with a key you're migrating, the migrate call for that key will return an error rather than replace it, unless you add the optional replace argument.

Running the following command will perform the migration. Before you run it, though, we will break it down piece by piece:

redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done

Let’s look at each part of this command separately:

redis-cli -n source_database -a source_password scan 0  . . .

The first part of the command, redis-cli, opens a connection to the local Redis server. The -n flag specifies which of Redis’s logical databases to connect to. Redis has 16 databases out of the box (with the first being numbered 0, the second numbered 1, and so on), so source_database can be any number between 0 and 15. If your source instance only holds data on the default database (numbered 0), then you do not need to include the -n flag or specify a database number.

Next comes the -a flag and the source instance's password, which together authenticate the connection. If your source instance does not require password authentication, then you do not need to include the -a flag.

It then runs Redis's scan command, which iterates over the keys held in the data set and returns them as a list. scan requires that you follow it with a cursor: the iteration begins when the cursor is set to 0, and a full iteration ends when the server returns a cursor of 0 again. Hence, we follow scan with a cursor of 0 to start the iteration at the beginning of the keyspace.
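
Note that a single scan call returns only one batch of keys along with the cursor to use in the next call; for the small sample data set in this guide, one batch covers every key. If your source database holds more keys than fit in a single batch, you could instead let redis-cli perform the full iteration for you with its --scan option, which keeps calling scan until the cursor comes back around to 0 and prints every key it finds. As a sketch, you could swap that into the first part of the migration command like this:

redis-cli -n source_database -a source_password --scan  . . .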

. . . | while read key; do . . .

The next part of the command begins with a vertical bar (|). In Unix-like systems, vertical bars are known as pipes and are used to direct the output of one process to the input of another.

Following this is the start of a while loop. In bash, as well as in most programming languages, a while loop is a control flow statement that lets you repeat a certain process, code, or command as long as a certain condition remains true.

The condition in this case is the sub-command read key, which reads the piped input and assigns it to the variable key. The semicolon (;) signifies the end of the while loop’s conditional statement, and the do following it precedes the action to be repeated as long as the while expression remains true. Every time the do statement completes, the conditional statement will read the next line piped from the scan command and assign that input to the key variable.

Essentially, this section says “as long as there is output from the scan command to be read, perform the following action.”
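
To see this pattern on its own, here's a minimal bash example that has nothing to do with Redis: it pipes three lines of text into a while read loop, which echoes each line back with a prefix:

printf 'first\nsecond\nthird\n' | while read line; do echo "Read: $line"; done

Running it prints Read: first, Read: second, and Read: third, one per line. The migration command works the same way, except the piped lines are key names from scan and the repeated action is a migrate call.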

. . . redis-cli -n source_database -a source_password migrate localhost 8000 "$key" . . .

This section of the command is what performs the actual migration. After another redis-cli call, it once again specifies the source database number with the -n flag and authenticates with the -a flag. You have to include these again because this redis-cli call is distinct from the one at the start of the command. Again, though, you do not need to include the -n flag or database number if your source Redis instance only holds data in the default 0 database, and you don’t need to include the -a flag if it doesn’t require password authentication.

Following this is the migrate command. Any time you use the migrate command, you must follow it with the target database’s hostname or IP address and its port number. Here, we follow the convention established in the prerequisite stunnel tutorial and point the migrate command to localhost at port 8000.

$key is the variable defined in the first part of the while loop, and represents the keys from each line of the scan command’s output.

. . . target_database 1000 copy auth managed_redis_password; done

This section is a continuation of the migrate command. It begins with target_database, which represents the logical database on the target instance where you want to store the data. Again, this can be any number from 0 to 15.

Next is a number representing a timeout. This timeout is the maximum amount of idle communication time between the two machines. Note that this isn’t a time limit for the operation, just that the operation should always make some level of progress within the defined timeout. Both the database number and timeout arguments are required for every migrate command.

Following the timeout is the optional copy flag. By default, migrate will delete each key from the source database after transferring it to the target; by including this option, though, you're instructing the migrate command to merely copy the keys so they will persist on the source.

After copy comes the auth flag followed by your Managed Redis Database’s password. This isn’t necessary if you’re migrating data to an instance that doesn’t require authentication, but it is necessary when you’re migrating data to one managed by DigitalOcean.

Following this is another semicolon, indicating the end of the action to be performed as long as the while condition holds true. Finally, the command closes with done, indicating the end of the loop. The command checks the condition in the while statement and repeats the action in the do statement until it’s no longer true.

All together, this command performs the following steps:

  • Scan a database on the source Redis instance and return every key held within it

  • Pass each line of the scan command’s output into a while loop

  • Read the first line and assign its content to the key variable

  • Migrate any key in the source database that matches the key variable to a database on the Redis instance at the other end of the TLS tunnel held on localhost at port 8000

  • Go back and read the next line, and repeat the process until there are no more keys to read
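
If you'd like to try the migrate command on a single key before running the full loop, you can do so directly from your bash prompt. The following sketch assumes the string1 key from Step 1, the default 0 database on both instances, and the TLS tunnel on local port 8000; it should return OK on success (or NOKEY if the key doesn't exist on the source):

redis-cli -a source_password migrate localhost 8000 string1 0 1000 COPY AUTH managed_redis_password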

Now that we’ve gone over each part of the migration command, you can go ahead and run it.

If your source instance only has data on the default 0 database, you do not need to include either of the -n flags or their arguments. If, however, you’re migrating data from any database other than 0 on your source instance, you must include the -n flags and change both occurrences of source_database to align with the database you want to migrate.
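
If you aren't sure which logical databases on your source instance hold data, you can check with the info keyspace command, which lists each non-empty database along with its key count:

redis-cli -a source_password info keyspace

Each line of its output names a database (db0, db1, and so on) followed by the number of keys it holds.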

If your source database requires password authentication, be sure to change source_password to the Redis instance's actual password. If it doesn't, though, make sure that you remove both occurrences of -a source_password from the command. Also, change managed_redis_password to your own Managed Database's password and be sure to change target_database to the number of whichever logical database on your target instance you want to write the data to:

Note: If you don’t have your Managed Redis Database’s password on hand, you can find it by first navigating to the DigitalOcean Control Panel. From there, click on Databases in the left-hand sidebar menu and then click on the name of the Redis instance to which you want to migrate the data. Scroll down to the Connection Details section where you’ll find a field labeled password. Click on the show button to reveal the password, then copy and paste it into the migration command — replacing managed_redis_password — in order to authenticate.

redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done
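
As a point of reference, if your source instance only holds data in the default 0 database and doesn't require a password, the command reduces to something like the following sketch (still substituting your own Managed Database password for managed_redis_password):

redis-cli scan 0 | while read key; do redis-cli MIGRATE localhost 8000 "$key" 0 1000 COPY AUTH managed_redis_password; done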

You will see output similar to the following:

Output
NOKEY
OK
OK
OK
OK
OK
OK

Note: Notice the first line of the command’s output which reads NOKEY. To understand what this means, run the first part of the migration command by itself:

redis-cli -n source_database -a source_password scan 0

If you migrated the sample data added in Step 1, this command's output will look like this:

Output
1) "0"
2) 1) "hash1"
   2) "string3"
   3) "list1"
   4) "string1"
   5) "string2"
   6) "set1"

The value "0" held in the first line is not a key held in your source Redis database, but a cursor returned by the scan command. Since there aren’t any keys on the server named “0”, there’s nothing there for the migrate command to send to your target instance and it returns NOKEY.

However, the command doesn’t fail and exit. Instead, it continues on by reading and migrating the keys found in the next lines of the scan command’s output.

To test whether the migration was successful, connect to your Managed Redis Database:

redis-cli -h localhost -p 8000 -a managed_redis_password

If you migrated data to any logical database other than the default, connect to that database with the select command:

select target_database

Run a scan command to see what keys are held there:

scan 0

If you completed Step 1 of this tutorial and added the example data to your source database, you will see output like this:

Output
1) "0"
2) 1) "set1"
   2) "string2"
   3) "hash1"
   4) "list1"
   5) "string3"
   6) "string1"

Lastly, run a ttl command on any key which you’ve set to expire in order to confirm that it is still volatile:

ttl string2
Output
(integer) 3944

This output shows that even though you migrated the key to your Managed Database, it is still set to expire based on the expire command you ran previously.

Once you’ve confirmed that all the keys on your source Redis database were exported to your target successfully, you can close your connection to the Managed Database. If you have clients writing data to the source Redis instance and you’ve already configured them to send their writes to the target, you can at this point configure them to stop sending data to the source.

Conclusion

By completing this tutorial, you will have moved data from your self-managed Redis data store to a Redis instance managed by DigitalOcean. The process outlined in this guide may not be optimal in every case. For example, you’d have to run the migration command multiple times (once for every logical database holding data) if your source instance is using databases other than the default one. However, when compared to other methods like replication or snapshotting, it is a fairly straightforward process that works well with a DigitalOcean Managed Database’s configuration.

Now that you’re using a DigitalOcean Managed Redis Database to store your data, you could measure its performance by running some benchmarking tests. Also, if you’re new to working with Redis, you could check out our series on How To Manage a Redis Database.
