How to sync data between two ElastiCache Redis instances

I have two AWS ElastiCache for Redis instances. One of them (let's say instance A) holds very important data sets and live connections, so downtime is unacceptable. Because of this, instead of doing a normal migration (blocking new writes on the source, taking a dump, and restoring it to the new instance), I'm trying to sync instance A's data to another ElastiCache instance (let's say instance B) with zero downtime. I tried RedisShake, but because AWS restricts certain commands (BGSAVE, CONFIG, REPLICAOF, SLAVEOF, SYNC, PSYNC, etc.), RedisShake does not work with ElastiCache. It gives the error below.

2022/04/04 11:58:42 [PANIC] invalid psync response, continue, ERR unknown command `psync`, with args beginning with: `?`, `-1`, 
[stack]: 
    2   github.com/alibaba/RedisShake/redis-shake/common/utils.go:252
            github.com/alibaba/RedisShake/redis-shake/common.SendPSyncContinue
    1   github.com/alibaba/RedisShake/redis-shake/dbSync/syncBegin.go:51
            github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).sendPSyncCmd
    0   github.com/alibaba/RedisShake/redis-shake/dbSync/dbSyncer.go:113
            github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).Sync
        ... ...

I've also tried Rump, but it isn't stable enough to handle any important process. First of all, it doesn't run as a background process: once the initial sync finishes it exits with "signal: exit done", so it never picks up changes made after that point. Second, each run only recognizes created or modified keys, not deletions. For example, on the first run a key "apple" with value "pear" is copied to the destination as-is, but after I delete "apple" on the source and run the Rump syncing script again, the key is still present on the destination. So it isn't literally syncing the source and the destination. On top of that, the last commit to the Rump GitHub repo was about three years ago; the project looks outdated to me.
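To illustrate what a one-shot tool like Rump is missing: a true mirror pass also has to delete destination keys that no longer exist on the source. Below is a minimal sketch of such a pass, assuming redis-py clients; the helper names are illustrative, not from any real tool.

```python
def plan_mirror(source_keys, dest_keys):
    """Decide what a mirror pass must do so dest ends up matching source.

    Returns (keys_to_copy, keys_to_delete): copy every key present on the
    source, delete destination keys that the source no longer has.
    """
    src = set(source_keys)
    dst = set(dest_keys)
    return src, dst - src

def mirror(source, dest):
    """source and dest are redis.Redis clients (redis-py).

    Uses DUMP/RESTORE so any value type is copied byte-for-byte.
    """
    to_copy, to_delete = plan_mirror(source.scan_iter(count=1000),
                                     dest.scan_iter(count=1000))
    for key in to_copy:
        dumped = source.dump(key)
        if dumped is not None:  # key may have expired between SCAN and DUMP
            # replace=True overwrites an existing key; ttl=0 means no expiry
            dest.restore(key, 0, dumped, replace=True)
    for key in to_delete:
        dest.delete(key)
```

Note this is still a point-in-time copy, not continuous replication; it only fixes the deletion problem, not the "ongoing changes" problem.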

After all this information and these attempts, my question is: is there a way to sync two ElastiCache for Redis instances? As I said, there is no room for downtime in my case. If anyone with this kind of experience has a bulletproof suggestion, it would be much appreciated. I've searched but unfortunately didn't find anything.

Thank you very much,

Best Regards.



Solution 1:

If the two ElastiCache Redis clusters are in the same account but different regions, you can consider using ElastiCache Global Datastore.

It has some restrictions on regions and node types, and both clusters must have the same configuration (number of nodes, etc.).

Limitations - https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Redis-Global-Datastores-Getting-Started.html
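If you prefer to script the Global Datastore setup, the boto3 ElastiCache client exposes create_global_replication_group. A minimal sketch, where the region, suffix, and replication-group ID are placeholders you would replace with your own:

```python
def global_datastore_params(suffix, primary_rg_id):
    """Build the request for CreateGlobalReplicationGroup.

    The primary replication group (instance A's) becomes the writable
    primary of the Global Datastore.
    """
    return {
        "GlobalReplicationGroupIdSuffix": suffix,
        "GlobalReplicationGroupDescription": "cross-region sync for " + primary_rg_id,
        "PrimaryReplicationGroupId": primary_rg_id,
    }

def create_global_datastore(primary_region="us-east-1"):
    # imported here so the parameter helper above has no hard dependency
    import boto3
    client = boto3.client("elasticache", region_name=primary_region)
    # "instance-a-rg" is a placeholder replication group ID
    return client.create_global_replication_group(
        **global_datastore_params("sync", "instance-a-rg"))
```

After the global replication group exists, you add a secondary cluster in the other region (CreateReplicationGroup with GlobalReplicationGroupId), and AWS handles the cross-region replication.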

Otherwise, there's a simple brute-force mechanism that you could code yourself:

  1. Create an EC2 client (let's call it Sync-er) that subscribes to a pub/sub channel (keyspace notifications) on ElastiCache Redis instance A.
  2. Whenever data changes, Sync-er replays the corresponding WRITE commands on ElastiCache Redis instance B.
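The two steps above can be sketched with redis-py as follows. This assumes keyspace notifications are enabled by setting notify-keyspace-events (e.g. to "KEA") in instance A's parameter group, which ElastiCache allows even though CONFIG SET is blocked; hostnames and database number 0 are placeholders.

```python
def event_to_action(channel, event):
    """Map a __keyspace@0__:<key> notification to a replay action.

    Returns ("delete", key) for removals, ("copy", key) otherwise.
    """
    key = channel.split(":", 1)[1]  # strip the "__keyspace@0__:" prefix
    if event in ("del", "expired"):
        return ("delete", key)
    return ("copy", key)

def run_syncer(source, dest):
    """source and dest are redis.Redis clients (redis-py).

    Subscribes to keyspace events on the source and mirrors each change
    to the destination with DUMP/RESTORE.
    """
    pubsub = source.pubsub()
    pubsub.psubscribe("__keyspace@0__:*")
    for msg in pubsub.listen():
        if msg["type"] != "pmessage":
            continue  # skip subscribe confirmations
        action, key = event_to_action(msg["channel"].decode(),
                                      msg["data"].decode())
        if action == "delete":
            dest.delete(key)
        else:
            dumped = source.dump(key)
            if dumped is not None:  # key may already be gone again
                dest.restore(key, 0, dumped, replace=True)
```

You would still need an initial full copy before starting the listener, and keyspace notifications are fire-and-forget, so a dropped connection can miss events; this is a sketch of the mechanism, not a production-grade replicator.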

NOTE - You'll have to make sure the clusters are in connectable VPCs: ElastiCache is only reachable from resources within its VPC. If instance A and instance B are in different VPCs, you'll have to peer them or connect them via Transit Gateway.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
