Restore a Replica Set to a New Environment with Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is our open source tool for backing up MongoDB clusters. Initially, the tool was developed to restore backups into the same environment where they were taken. In this post, I will show you how to restore a backup to a new environment instead.

Let’s assume you followed the instructions to install Percona Backup for MongoDB packages on your newly provisioned replica set, and you already have at least one full backup of the source stored in remote backup storage.

Create the Backup User

Note: I am using a three-node replica set running on CentOS 7 for this example.

The first step is to create the backup role on the target cluster’s primary:

db.getSiblingDB("admin").createRole({ "role": "pbmAnyAction",
      "privileges": [
         { "resource": { "anyResource": true },
           "actions": [ "anyAction" ]
         }
      ],
      "roles": []
   });

Now, let’s also create the backup user and give it the proper permissions:

db.getSiblingDB("admin").createUser({user: "pbmuser",
       "pwd": "secretpwd",
       "roles" : [
          { "db" : "admin", "role" : "readWrite", "collection": "" },
          { "db" : "admin", "role" : "backup" },
          { "db" : "admin", "role" : "clusterMonitor" },
          { "db" : "admin", "role" : "restore" },
          { "db" : "admin", "role" : "pbmAnyAction" }
       ]
    });
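If you are provisioning more than one target cluster, it can help to keep the role and user definitions above in a small script that you replay with the mongo shell. A minimal sketch, writing the file to the working directory so you can review it first (the setup-pbm-user.js filename is an assumption):

```shell
# Sketch: save the role and user definitions to a replayable script.
# Review it, then run it against the primary with: mongo admin setup-pbm-user.js
tee setup-pbm-user.js <<'EOF'
db.getSiblingDB("admin").createRole({ "role": "pbmAnyAction",
   "privileges": [{ "resource": { "anyResource": true }, "actions": [ "anyAction" ] }],
   "roles": [] });
db.getSiblingDB("admin").createUser({ user: "pbmuser", "pwd": "secretpwd",
   "roles": [
      { "db": "admin", "role": "readWrite", "collection": "" },
      { "db": "admin", "role": "backup" },
      { "db": "admin", "role": "clusterMonitor" },
      { "db": "admin", "role": "restore" },
      { "db": "admin", "role": "pbmAnyAction" }
   ] });
EOF
```

Remember to swap the example password for a real one before running it.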

Configure PBM Agent

The next step is configuring the credentials for pbm-agent on each server. It is important to point each agent at its local node only (do not use the replica set URI here):

tee /etc/sysconfig/pbm-agent <<EOF
PBM_MONGODB_URI="mongodb://pbmuser:secretpwd@localhost:27017"
EOF
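Since the URI always points at localhost, this environment file is identical on every node and is easy to template. A sketch, writing to the working directory instead of /etc/sysconfig so you can inspect it before deploying (PBM_USER and PBM_PASS stand in for your real credentials):

```shell
# Sketch: render the per-node pbm-agent environment file.
# In production the target is /etc/sysconfig/pbm-agent on RHEL/CentOS
# (Debian-based packages typically use /etc/default/pbm-agent).
PBM_USER=pbmuser
PBM_PASS=secretpwd
printf 'PBM_MONGODB_URI="mongodb://%s:%s@localhost:27017"\n' \
   "$PBM_USER" "$PBM_PASS" > pbm-agent.env
cat pbm-agent.env
```

The same rendered file can then be pushed to each node with your configuration management tool of choice.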

Now we can start the agent on all nodes of the new cluster:

systemctl start pbm-agent

We have to specify the location where backups are stored. This is saved inside MongoDB itself. The easiest way to load the configuration options at first is to create a YAML file and upload it. For example, given the following file:

tee /etc/pbm-agent-storage.conf <<EOF
type: s3
s3:
   region: us-west-2
   bucket: pbm-test-bucket-78967
   credentials:
      access-key-id: "your-access-key-id-here"
      secret-access-key: "your-secret-key-here"
EOF
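S3-compatible storage is the most common choice, but PBM can also write backups to a locally mounted filesystem, for example an NFS share. A sketch of that variant, written to the working directory for review (the /backup path is an assumption; whatever path you pick must be the same shared mount on every node of the replica set):

```shell
# Sketch: filesystem storage variant of the PBM config.
# The path must resolve to the same shared storage on all nodes.
tee pbm-agent-storage-fs.conf <<EOF
type: filesystem
filesystem:
   path: /backup
EOF
```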

Use the pbm config --file command to save (or update) the admin.pbmConfig collection, which all pbm-agent instances refer to.

$ pbm config --file=/etc/pbm-agent-storage.conf
[Config set]
------
pitr:
  enabled: false
storage:
  type: s3
  s3:
    region: us-west-2
    bucket: pbm-test-bucket-78967

Backup list resync from the store has started

Sync the Backups and Perform the Restore

As you can see, PBM automatically starts scanning the remote destination for backup files. After a few moments, you should be able to list the existing backups:

$ pbm list --mongodb-uri mongodb://pbmuser:secretpwd@localhost:27017/?replicaSet=testRPL
Backup snapshots:
  2020-11-02T16:53:53Z
PITR <off>:
  2020-11-02T16:54:15 - 2020-11-05T11:43:26

Note: in the case of a sharded cluster, the above connection must be to the config server replica set.

You can also use the following command if you need to re-run the scan for any reason:

pbm config --force-resync

The last step is to fire off the restore:

$ pbm restore 2020-11-02T16:53:53Z --mongodb-uri mongodb://pbmuser:secretpwd@localhost:27017/?replicaSet=testRPL
...Restore of the snapshot from '2020-11-02T16:53:53Z' has started

We can check the progress by tailing the journal:

$ journalctl -u pbm-agent -f

Nov 05 13:00:31 mongo0 pbm-agent[10875]: 2020-11-05T13:00:31.000+0000 [INFO] got command restore [name: 2020-11-05T13:00:31.580485314Z, backup name: 2020-11-02T16:53:53Z] <ts: 1604581231>
Nov 05 13:00:31 mongo0 pbm-agent[10875]: 2020-11-05T13:00:31.000+0000 [INFO] restore/2020-11-02T16:53:53Z: restore started
Nov 05 13:00:34 mongo0 pbm-agent[10875]: 2020-11-05T13:00:34.918+0000        preparing collections to restore from
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.011+0000        reading metadata for admin.pbmRUsers from archive on stdin
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.051+0000        restoring admin.pbmRUsers from archive on stdin
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.517+0000        restoring indexes for collection admin.pbmRUsers from metadata
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.548+0000        finished restoring admin.pbmRUsers (3 documents, 0 failures)
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.548+0000        reading metadata for admin.pbmRRoles from archive on stdin
Nov 05 13:00:35 mongo0 pbm-agent[10875]: 2020-11-05T13:00:35.558+0000        restoring admin.pbmRRoles from archive on stdin
Nov 05 13:00:36 mongo0 pbm-agent[10875]: 2020-11-05T13:00:36.011+0000        restoring indexes for collection admin.pbmRRoles from metadata
Nov 05 13:00:36 mongo0 pbm-agent[10875]: 2020-11-05T13:00:36.031+0000        finished restoring admin.pbmRRoles (2 documents, 0 failures)
Nov 05 13:00:36 mongo0 pbm-agent[10875]: 2020-11-05T13:00:36.050+0000        reading metadata for admin.test from archive on stdin
Nov 05 13:00:36 mongo0 pbm-agent[10875]: 2020-11-05T13:00:36.061+0000        restoring admin.test from archive on stdin
Nov 05 13:01:09 mongo0 pbm-agent[10875]: 2020-11-05T13:01:09.775+0000        no indexes to restore
Nov 05 13:01:09 mongo0 pbm-agent[10875]: 2020-11-05T13:01:09.776+0000        finished restoring admin.test (1000000 documents, 0 failures)
Nov 05 13:01:09 mongo0 pbm-agent[10875]: 2020-11-05T13:01:09.901+0000        reading metadata for admin.pbmLockOp from archive on stdin
Nov 05 13:01:09 mongo0 pbm-agent[10875]: 2020-11-05T13:01:09.993+0000        restoring admin.pbmLockOp from archive on stdin
Nov 05 13:01:11 mongo0 pbm-agent[10875]: 2020-11-05T13:01:11.379+0000        restoring indexes for collection admin.pbmLockOp from metadata
Nov 05 13:01:11 mongo0 pbm-agent[10875]: 2020-11-05T13:01:11.647+0000        finished restoring admin.pbmLockOp (0 documents, 0 failures)
Nov 05 13:01:11 mongo0 pbm-agent[10875]: 2020-11-05T13:01:11.751+0000        reading metadata for test.test from archive on stdin
Nov 05 13:01:11 mongo0 pbm-agent[10875]: 2020-11-05T13:01:11.784+0000        restoring test.test from archive on stdin
Nov 05 13:01:27 mongo0 pbm-agent[10875]: 2020-11-05T13:01:27.772+0000        no indexes to restore
Nov 05 13:01:27 mongo0 pbm-agent[10875]: 2020-11-05T13:01:27.776+0000        finished restoring test.test (533686 documents, 0 failures)
Nov 05 13:01:27 mongo0 pbm-agent[10875]: 2020-11-05T13:01:27.000+0000 [INFO] restore/2020-11-02T16:53:53Z: mongorestore finished
Nov 05 13:01:30 mongo0 pbm-agent[10875]: 2020-11-05T13:01:30.000+0000 [INFO] restore/2020-11-02T16:53:53Z: starting oplog replay
Nov 05 13:01:30 mongo0 pbm-agent[10875]: 2020-11-05T13:01:30.000+0000 [INFO] restore/2020-11-02T16:53:53Z: oplog replay finished on {0 0}
Nov 05 13:01:30 mongo0 pbm-agent[10875]: 2020-11-05T13:01:30.000+0000 [INFO] restore/2020-11-02T16:53:53Z: restoring users and roles
Nov 05 13:01:31 mongo0 pbm-agent[10875]: 2020-11-05T13:01:31.000+0000 [INFO] restore/2020-11-02T16:53:53Z: restore finished successfully

Conclusion

Percona Backup for MongoDB is a must-have tool for sharded environments because it guarantees backup consistency across all shards. This article shows how PBM can be used for disaster recovery; the process is simple and largely automatic.

A caveat here is that unless you want to go down the rabbit hole of manual metadata renaming, you should keep the same replica set names on both the source and target clusters.

If you would like to follow the development, report a bug, or have ideas for feature requests, make sure to check out the PBM project in the Percona issue tracker.


by Ivan Groenewold via Percona Database Performance Blog
