Live Migrate Your Sharded Cluster to Atlas

Atlas can perform a live migration of a source sharded cluster to an Atlas sharded cluster, keeping the cluster in sync with the remote source until you cut your applications over to the Atlas cluster. Once you reach the cutover step in the following procedure, you should stop writes to the source cluster by stopping your application instances, pointing them to the Atlas cluster, and restarting them.

Note

You cannot target a Global Cluster as the destination for Live Migration.

For a procedure on live migrating a replica set, see Live Migrate Your Replica Set to Atlas.

To begin, click on the ellipsis button and choose Migrate Data to this Cluster from the dropdown menu.

Note

On the Cluster list, the ellipsis button appears beneath the cluster name, as shown below. When you view a cluster’s details, the ellipsis appears on the right-hand side of the screen, next to the Connect and Configuration buttons.

The Live Import Migration button in the Cluster Modal

Prerequisites

Migration Path

Atlas live migration supports the following migration paths:

Source Sharded Cluster MongoDB Version    Destination Sharded Cluster MongoDB Version
3.4                                       3.4
3.6                                       3.6
4.0                                       4.0

Network Access

Source Cluster Firewall Must Allow Access from Live Migration Server

The Atlas Live Migration process streams data through a MongoDB-controlled application server. Atlas provides the IP ranges of the MongoDB Live Migration servers during the Live Migration process. Grant these IP ranges access through your source cluster's firewall so that the source cluster accepts connections from the Live Migration servers.

Atlas Cluster IP Whitelist Must Allow Access From Your Application Servers

Atlas allows connections to a cluster only from entries in the project's whitelist. You must manually add the IP addresses of your application servers to the project whitelist before beginning the migration procedure.

Atlas temporarily adds the IP addresses of the Atlas migration servers to the project whitelist. During the migration procedure, you cannot edit or delete these entries. Atlas removes them automatically once the procedure completes.

For documentation on adding entries to the Atlas whitelist, see Configure Whitelist Entries.

Pre-Migration Validation

Atlas performs a number of validation checks on the source and destination cluster before starting the Live Migration procedure.

  • The source cluster must be a sharded cluster.

    If the source is a replica set, use Live Migration to migrate the cluster to an Atlas replica set first, then scale that cluster to a sharded cluster.

    If the source is a standalone, convert it to a replica set before using Live Migration, then scale the resulting cluster to a sharded cluster.

  • The source cluster must use CSRS (Config Server Replica Sets). See Replica Set Config Servers in the MongoDB manual.

  • Atlas must have connectivity to the hostname and port of each mongos and of every shard and config server replica set member in the source cluster.

  • Atlas must be able to stop and start the Sharded Cluster Balancer on the source cluster.

  • The source cluster must have the same feature compatibility version and major MongoDB version as the destination cluster. The major MongoDB version is the first two numbers of the full version, e.g. 3.2 in 3.2.x, 3.4 in 3.4.x, or 3.6 in 3.6.x.

    To check the feature compatibility version of a host in the source cluster, run the following command from the mongo shell:

    db.runCommand( { getParameter : 1, "featureCompatibilityVersion" : 1 } )
    

    Use the setFeatureCompatibilityVersion database command to set the featureCompatibilityVersion flag as needed, as shown in the example after this list.

  • The destination Atlas cluster must be a sharded cluster with the same number of shards as the source sharded cluster.
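
As referenced above, the following is a minimal sketch of setting the feature compatibility version on the source cluster. Run it from the mongo shell against a mongos, using a user with privileges to run the command, and substitute the version value that matches your deployment:

    db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )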

Source Cluster Security

Atlas supports only SCRAM for connecting to source clusters that enforce authentication.

If the source cluster enforces authentication, create a user with the following privileges:

  • Read all databases and collections. The readAnyDatabase role on the admin database covers this requirement.
  • Read the oplog.

Various built-in roles provide sufficient privileges. For example:

  • For source clusters running MongoDB version 3.4+, a user must have, at a minimum, both the clusterMonitor and readAnyDatabase roles. For example:

    use admin
    db.createUser(
      {
        user: "mySourceUser",
        pwd: "mySourceP@$$word",
        roles: [ "clusterMonitor", "readAnyDatabase" ]
      }
    )
    
  • For source clusters running MongoDB version 3.2, a user must have, at a minimum, both the clusterManager and readAnyDatabase roles, as well as read access on the local database. This requires a custom role, which you can create with the following commands:

    use admin
    db.createRole(
      {
        role: "migrate",
        privileges: [
          { resource: { db: "local", collection: "" }, actions: [ "find" ] }
        ],
        roles: ["readAnyDatabase", "clusterManager"]
      }
    )
    db.createUser(
      {
        user: "mySourceUser",
        pwd: "mySourceP@$$word",
        roles: [ "migrate" ]
      }
    )
    
  • For source clusters running MongoDB version 2.6 or 3.0, a user must have, at a minimum, the readAnyDatabase role. For example:

    use admin
    db.createUser(
      {
        user: "mySourceUser",
        pwd: "mySourceP@$$word",
        roles: [ "readAnyDatabase" ]
      }
    )
    

Specify the username and password to Atlas when prompted by the Live Migration procedure.

If the source cluster uses a different authentication mechanism, you can use the mongomirror tool to migrate data from the source cluster to the destination Atlas cluster. See Migrate with mongomirror.

Index Key Limits

If your MongoDB deployment contains indexes with keys that exceed the index key limit, you must set the MongoDB server parameter failIndexKeyTooLong to false before starting the Live Migration procedure.

Note

Modifying indexes so that they contain no oversized keys is preferable to setting the failIndexKeyTooLong server parameter to false. See the server manual for strategies on dealing with oversized index keys.
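
If you do choose to set the parameter, the following minimal sketch changes it at runtime; run it from the mongo shell against each mongod in the source cluster, using a user with privileges to set server parameters:

    db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } )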

Considerations

Source Cluster Balancer

Atlas Live Migration stops the sharded cluster balancer on the source cluster at the start of the procedure, and starts the balancer at the end of the procedure.

If you cancel live migration, Atlas restarts the balancer on the source cluster.
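
To confirm the balancer state on the source cluster yourself, for example after canceling a migration, you can run a quick check from the mongo shell connected to a mongos, as in the following sketch:

    sh.getBalancerState()   // returns true if the balancer is enabled
    sh.startBalancer()      // re-enables the balancer if needed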

Destination Cluster Configuration

When configuring the destination Atlas cluster, consider the following:

  • The live migration process streams data through a MongoDB-managed application server. Each server runs on infrastructure hosted in the nearest region to the source cluster. The following regions are available:

    Europe
      • Ireland
      • Frankfurt
      • London
    Americas
      • Eastern US
      • Western US
    APAC
      • Sydney
  • Due to network latency, the live migration process may not be able to keep up with a source cluster that has an extremely heavy write load. In this situation, you can still migrate directly from the source cluster by pointing the mongomirror tool to the destination Atlas cluster.

  • The live migration process may not be able to keep up with a source cluster whose write workload is greater than what can be transferred and applied to the destination cluster. You may need to scale the destination cluster up to a tier with more processing power, bandwidth or disk IO.

  • You cannot target a Global Cluster as the destination for Live Migration.

    Important

    You cannot modify the destination Atlas cluster once you start the live migration procedure. If you need to scale up the destination cluster, first cancel the live migration procedure, then scale up the cluster and restart the live migration procedure.

MongoDB Users and Roles

Atlas does not migrate any user or role data to the destination cluster.

If the source cluster enforced authentication, you must re-create the credentials used by your applications on the destination Atlas cluster. Atlas uses SCRAM for user authentication. See Configure MongoDB Users for a tutorial on creating MongoDB users in Atlas.

Canceling Live Migration

You can cancel the process at any time by clicking Cancel. Atlas displays the Sharded Cluster Live Import in Progress message for the destination cluster until the cluster is ready for normal access.

If you cancel the live migration procedure before completion, Atlas does not remove any data migrated up to that point. If you restart the live migration procedure using the same Atlas cluster as the destination, Atlas wipes all data from the cluster.

Testing the Destination Cluster

You may wish to migrate data to your destination cluster, then stop the migration process and test your destination cluster while leaving the source cluster running and serving data to your applications.

To test your destination cluster with production data, follow the migration procedure as far as step 2. When you’re ready to perform the complete migration process, skip step 2 and proceed to step 3.

Avoid Namespace Changes

You should not make any namespace changes during the migration process, such as using the renameCollection command or executing an aggregation pipeline that includes the $out aggregation stage.
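
For illustration only, these are the kinds of operations to avoid while the migration runs; the database and collection names here are hypothetical:

    // Avoid renaming collections during the migration:
    db.adminCommand( { renameCollection: "mydb.oldOrders", to: "mydb.orders" } )

    // Avoid aggregation pipelines that write their output to a collection:
    db.orders.aggregate( [ { $match: { status: "A" } }, { $out: "ordersArchive" } ] )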

Destination Cluster Network Access

During Live Migration, the mongos processes on the destination cluster are shut down and cluster connectivity via the mongos servers is suspended. The mongos processes restart automatically once migration is complete.

Rolling Restarts

After the migration process is complete, each replica set in your destination cluster restarts its members one at a time. This is called a rolling restart, and as a consequence, a failover occurs on the primary of each replica set. To ensure a smooth migration, it is recommended that you perform the Test Failover procedure prior to migrating your data to the destination cluster.

Migrate Your Sharded Cluster

Staging and Production Migrations

Consider performing a partial live migration procedure first to create a staging environment before repeating the procedure to create your production environment. The procedure documented below provides a callout for the appropriate time to cancel the procedure and create a staging environment.

Use the staging environment to test application behavior and performance using the latest driver version that supports the MongoDB version of the destination Atlas cluster. Then, repeat the live migration procedure in full to transition your applications from your source cluster to the Atlas destination cluster.

Important

Avoid making changes to the source cluster configuration while the Live Migration procedure runs, such as removing replica set members or modifying mongod runtime settings like featureCompatibilityVersion.

Pre-Migration Checklist

Before starting the import process:

  • If you don’t already have a destination cluster, create a new Atlas deployment and configure it as needed. For complete documentation on creating an Atlas cluster, see Tutorial: Create a New Cluster.

  • After your Atlas cluster is deployed, ensure that you can connect to it from all client hardware where your applications run. Testing your connection string helps ensure that your data migration process can complete with minimal downtime.

    1. Download and install the mongo shell on a representative client machine, if you don’t already have it.
    2. Connect to your destination cluster using the connection string from the Atlas UI, as in the sketch below. For more information, see Connect via mongo Shell.

    Once you have verified your connectivity to your target cluster, start the import procedure.
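
A connection test from the mongo shell might look like the following sketch; the cluster address, username, and database name are placeholders, so use the exact connection string and credentials that the Atlas UI provides for your cluster:

    mongo "mongodb+srv://mycluster-abcde.mongodb.net/test" --username myAtlasUser

The mongodb+srv format requires mongo shell version 3.6 or later; for older shells, use the standard connection string format that the Atlas UI shows for your shell version.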

Procedure

1

Start the migration process.

  1. Click the ellipsis button for the destination Atlas cluster. On the Cluster list, the ellipsis button appears beneath the cluster name, as shown below. When you view a cluster’s details, the ellipsis appears on the right-hand side of the screen, next to the Connect and Configuration buttons.

    The Live Import Migration button in the Cluster Modal
  2. Click Migrate Data to this Cluster.

  3. Atlas displays a walk-through screen with instructions on how to proceed with the live migration. Prepare the information as stated in the walk-through screen, then click I’m Ready To Migrate.

  4. Atlas displays a walk-through screen that collects information required to connect to the source cluster.

    • Atlas displays the IP address of the MongoDB application server responsible for your live migration at the top of the walk-through screen. Configure your source cluster firewall to grant access to the displayed IP address.

    • Enter the hostname and port of any mongos of the source sharded cluster into the provided text box. For example, mongos.example.net:27017.

    • If the source cluster enforces authentication, enter a username and password into the provided text boxes.

      See Source Cluster Security for guidance on the user permissions required by Atlas live migration.

    • If the source cluster uses TLS/SSL, toggle the SSL button.

    • If the source cluster uses TLS/SSL and is not using a public Certificate Authority (CA), copy the contents of the source cluster’s CA file into the provided text box.

  5. Click Validate to confirm that Atlas can connect to the source cluster.

    If validation fails, check that:

    • You have granted the Live Migration servers network access on your source cluster firewall.
    • The provided user credentials, if any, exist on the source cluster and have the required permissions.
    • The SSL toggle is enabled only if the source cluster requires it.
    • The CA file provided, if any, is valid and correct.
    • The provided hostnames are valid and reachable over the public internet.
  6. Click Start Migration to start the migration process.

    Atlas displays the live migration progress in the UI. During live migration, you cannot view metrics or access data for the destination cluster.

    Atlas displays the progress of live migration, including the time remaining for the destination cluster to catch up to the source cluster.

    Click View Progress per Shard to view the sync progress and migration time remaining per shard. If the initial sync process for a given shard fails, you can try to restart the sync by clicking Restart.

When the migration timer and the Prepare to Cutover button turn green, proceed to the next step.

2

(Optional) Test the destination cluster.

If you wish to skip testing and complete the migration, proceed to step 3.

If you wish to do a dry run of the migration process and test the destination cluster for performance and data integrity, you may optionally click the Cancel button at this point. The source cluster stops syncing data with the destination cluster, but all the transferred data remains, so you can test your applications with the new cluster.

When your testing is complete and you’re ready to perform the complete migration process, begin again from step 1. All the databases and collections which were created during the test run will be deleted and rebuilt.

3

Perform the cutover.

When Atlas detects that the source and destination clusters are nearly in sync, it starts an extendable 72-hour timer to begin the cutover procedure. If the 72-hour period passes, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking the Extend time hyperlink below the <time> left to cut over timer.

Important

The cutover procedure requires stopping your application and all writes to the source cluster. Consider scheduling and announcing a maintenance period to minimize interruption of service on the dependent applications.

  1. Once you are prepared to cut your applications over to the destination Atlas cluster, click Prepare to Cutover.

  2. Atlas displays a walk-through screen with instructions on how to proceed with the cutover. The optime gap displays how far behind the destination cluster is compared to the source cluster. You must stop your application and all writes to the source cluster to allow the destination cluster to close the optime gap.

    Perform the steps described in the walk-through screen to cut over your applications to the Atlas cluster. The walk-through screen provides the cluster connection string your applications must use to connect to the Atlas cluster.

    Staging Migration

    If you are creating a staging environment for testing your applications, note the optime gap to identify how far behind your staging environment will be compared to your source cluster.

    Press Cancel to cancel the live migration. Atlas terminates the migration at that point in time, leaving any migrated data in place. Atlas displays the Sharded Cluster Live Import in Progress message for the destination cluster until the cluster is ready for normal access. See Canceling Live Migration for more information on cancelling a live migration procedure.

    Once the cancellation is complete, you can test your staging application against the partially migrated data.

  3. Click Cut Over when you have completed the cutover sequence and updated your applications to point at the destination Atlas cluster. The optime gap must be 0:00 before you can complete the procedure.

    Atlas automatically prepares the Atlas cluster once you complete the cutover sequence. During this time, you cannot access the Atlas cluster. Atlas displays the status of the cluster configuration in the UI.

    Once Atlas displays the cluster as active and ready, you can point your applications at the Atlas cluster and begin performing write operations.

    Important

    Write operations issued to the source cluster after the cutover sequence are not mirrored to the destination Atlas cluster. Double check that your applications are pointed at the new Atlas cluster before restarting them.

Migration Support

If you have any questions regarding migration support beyond what is covered in this documentation, or if you encounter an error during migration, please request support through the Atlas UI.

To file a support ticket:

  1. Click Support in the left-hand navigation.
  2. Click Request Support.
  3. For Issue Category, select Help with live migration.
  4. For Priority, select the appropriate priority. For questions, please select Medium Priority. If there was a failure in migration, please select High Priority.
  5. For Request Summary, please include Live Migration in your summary.
  6. For More details, please include any other relevant details to your question or migration error.
  7. Click the Request Support button to submit the form.