
Restore a Cluster from a Continuous Backup Snapshot or Point in Time

Feature unavailable in Free and Shared-Tier Clusters

This feature is not available for M0 (Free Tier), M2, and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Tier), M2, and M5 Limitations.

Atlas lets you restore data from a scheduled continuous backup snapshot or from a selected point in time between snapshots. For replica sets, you can restore from selected points in time within the last 24 hours. For sharded clusters, you can restore from checkpoints between snapshots within the last 24 hours.

You must restore a backup to an Atlas cluster running the same major release version of MongoDB as the cluster that you want to restore.

Tip

You can still use backups made before an upgrade. For example, you can restore a 3.6 cluster to 4.0 with the following procedure:

  1. Restore the old 3.6 backup to another 3.6 cluster.
  2. Upgrade the restored cluster to 4.0.
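Before restoring, you can confirm that the source and target clusters run the same major release. As a quick check (a sketch, not an Atlas-specific command), connect to each cluster with the mongo shell and compare the first two components of the version string:

```javascript
// Run in the mongo shell connected to each cluster.
// The major release (e.g. "4.0") must match between the
// backup's source cluster and the restore target.
db.version()
```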

For instructions on restoring data from a cloud provider snapshot, see Restore a Cluster from a Cloud Provider Snapshot.

Considerations

Downtime for the Target Cluster

The restore process requires downtime for the target cluster.

MongoDB Versions Must Be Compatible

The MongoDB versions must also be compatible. For instance, you cannot restore from a snapshot of a 4.0 cluster to a 3.6 or earlier cluster.

Restore to Atlas or Cloud Manager

If you have the proper project permissions, you can restore to a cluster of a different project in either Atlas or Cloud Manager:

Restore to Project on    Required Roles on Destination Project
Atlas                    Project Owner
Cloud Manager            One of the following Cloud Manager roles:

Prerequisites

Stop Client Operations during Restoration

You must ensure that the target Atlas cluster does not receive client requests during restoration. You must either:

  • Restore to a new Atlas cluster and reconfigure your application to use that new cluster once the new deployment is running, or
  • Ensure that the target Atlas cluster cannot receive client requests while you restore data.
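One way to verify that clients have stopped issuing requests is to list in-progress operations from the mongo shell before you start the restore. This is a hedged sketch using the standard db.currentOp helper; the filter shown is an illustration, not an exhaustive way to distinguish client traffic from internal activity:

```javascript
// Run in the mongo shell connected to the target cluster.
// Lists currently active operations; inspect the output for
// entries originating from your application's connections.
db.currentOp({ active: true })
```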

Procedure

1

Click Backup, then the Overview tab.

The Overview lists the project’s clusters.

  • If backup is enabled for the cluster, the Status is Active.
  • If backup is disabled for the cluster, the Status is Inactive.
2

Choose the cluster to restore.

  • Hover over the Active status of the cluster and click Restore or Download, or
  • From the ellipsis (…) icon menu next to the cluster, select Restore.
3

Select the restore point.

  1. Choose the point from which you want to restore your backup.

    Snapshot
      Allows you to choose one stored snapshot.
      Action: Select an existing snapshot to restore.

    Point In Time
      Creates a custom snapshot that includes all operations up to but not including the selected time. By default, the Oplog Store stores 24 hours of data.

      Example
      If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

      PIT Restore Timeframe Limits
      You cannot perform a PIT restore that covers any time prior to the latest backup resync.
      You must enable cluster checkpoints to perform a PIT restore on a sharded cluster.
      If no checkpoint that includes your date and time is available, Atlas asks you to choose another point in time.

      Action: Select a Date and Time.

    Oplog Timestamp (Replica Sets Only)
      Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp. The Oplog Timestamp is represented as two fields:

        Timestamp   The number of seconds that have elapsed since the UNIX epoch.
        Increment   The order of the operation applied within that second, as a 32-bit ordinal.

      Action: Type an Oplog Timestamp and Increment. Run a query against local.oplog.rs on your replica set to find the desired timestamp.

  2. Click Next.

Finding the latest Oplog Entry

To find the latest Oplog entry, run the following query in a mongo shell:

db.getSiblingDB('local').oplog.rs.find().sort({$natural:-1}).limit(1).pretty()
{
  "ts": Timestamp(1537559320, 1),
  "h": NumberLong("-2447431566377702740"),
  "v": 2,
  "op": "n",
  "ns": "",
  "wall": ISODate("2018-09-21T19:48:40.708Z"),
  "o": {
    "msg": "initiating set"
  }
}

The parts of the ts value correspond to the values you need for the Timestamp and Increment boxes.

Note

To translate the epoch time into a human-readable timestamp, try using a tool like Epoch Converter.

MongoDB does not endorse this service. This reference is informational only.

4

Choose the cluster to restore the files to.

  1. Click Choose Cluster to Restore to.

  2. Complete the following fields:

    Project
      Select the project to which you want to restore the snapshot.

    Cluster to Restore to
      Select the cluster to which you want to restore the snapshot.

      Atlas must manage the target replica set.

      Warning
      Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.

  3. Click Restore.

    Atlas notes how much storage space the restore requires.

5

Click Restore.