Live Migrate Your Replica Set to Atlas¶
Atlas can perform a live migration of a source replica set to an Atlas cluster, keeping the cluster in sync with the remote source until you cut your applications over to the Atlas cluster. Once you reach the cutover step in the following procedure, you should stop writes to the source cluster by stopping your application instances, pointing them to the Atlas cluster, and restarting them.
To begin, click the ellipsis ... button and select Migrate Data to this Cluster from the dropdown menu.
On the Cluster list, the ellipsis ... button appears beneath the cluster name. When you view a cluster's details, the ellipsis ... button appears on the right-hand side of the screen, next to the Connect and Configuration buttons.

Restrictions¶
- You cannot select an `M0` (Free Tier) or `M2`/`M5` shared cluster as the source or destination for live migration. To migrate data from an `M0` (Free Tier) or `M2`/`M5` shared cluster to a paid cluster, see Modify a Cluster.
- Live migration does not support VPC peering or private endpoints for either the source or destination cluster.
For a procedure on live migrating a sharded cluster, see Live Migrate Your Sharded Cluster to Atlas.
Prerequisites¶
To help ensure a smooth data migration, your source cluster should meet all production cluster recommendations. Check the Operations Checklist and Production Notes before beginning the Live Migration process.
Upgrade Path¶
Atlas live migration supports the following migration paths:
| Source Replica Set MongoDB Version | Destination Atlas Replica Set MongoDB Version |
|---|---|
| 2.6 | 3.6 and higher |
| 3.0 | 3.6 and higher |
| 3.2 | 3.6 and higher |
| 3.4 | 3.6 and higher |
| 3.6 | 3.6 and higher |
| 4.0 | 4.0 and higher |
| 4.2 | 4.2 and higher |
| 4.4 | 4.4 and higher |
Users migrating from a MongoDB 2.6 cluster should take particular care to update and test their applications in the context of the destination Atlas cluster. In general, whenever you migrate to a newer version of MongoDB, plan to update and test your application against the destination cluster.
Network Access¶
Configure network permissions for the following components:
Source Cluster Firewall Allows Traffic from Live Migration Server¶
Any firewalls for the source cluster must grant the live migration server access to the source cluster.
The Atlas Live Migration process streams data through a MongoDB-controlled application server. Atlas provides the IP ranges of the MongoDB Live Migration servers during the Live Migration process. Grant these IP ranges access to your source cluster. This allows the MongoDB Live Migration server to connect to the source cluster.
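As a hypothetical example, if the source cluster's firewall is managed with iptables, a rule like the following would grant one of the displayed ranges access to the default `mongod` port. The CIDR block shown is a placeholder; use the actual ranges Atlas displays, and adapt the rule to whatever firewall or security-group tooling you use:

```shell
# Allow inbound connections from a Live Migration IP range (placeholder CIDR)
# to the mongod port 27017. Repeat for each range Atlas displays.
iptables -A INPUT -s 192.0.2.0/24 -p tcp --dport 27017 -j ACCEPT
```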
Atlas Cluster Allows Traffic from Your Application Servers¶
Atlas allows connections to a cluster from hosts added to the project IP access list. Add the IP addresses or CIDR blocks of your application hosts to the project IP access list. Do this before beginning the migration procedure.
Atlas temporarily adds the IP addresses of the Atlas migration servers to the project IP access list. During the migration procedure, you can't edit or delete this entry. Atlas removes this entry once the procedure completes.
To learn how to add entries to the Atlas IP access list, see Configure IP Access List Entries.
Pre-Migration Validation¶
Atlas performs a number of validation checks on the source and destination clusters before starting the Live Migration procedure.
- The source cluster must be a replica set. If the source is a standalone, convert it to a replica set before using Live Migration.
- The destination Atlas cluster must be a replica set.
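If your source is currently a standalone, the conversion can be sketched as follows. This is a minimal outline; the replica set name `rs0` and the hostname are placeholders, and you should follow the server manual's full conversion procedure:

```javascript
// 1. Restart mongod with a replica set name, e.g.:
//      mongod --replSet rs0 <your existing options>
// 2. From the mongo shell connected to that node, initiate the set:
rs.initiate(
   {
      _id: "rs0",
      members: [ { _id: 0, host: "source.example.net:27017" } ]
   }
)
// 3. Verify the member reports PRIMARY before starting Live Migration:
rs.status()
```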
Source Cluster Security¶
Atlas only supports SCRAM for connecting to source clusters enforcing authentication.
If the source cluster enforces authentication, create a user with the following privileges:
- Read all databases and collections. The `readAnyDatabase` role on the `admin` database covers this requirement.
- Read the oplog.
Various built-in roles provide sufficient privileges.

For source clusters running MongoDB version 3.4 or later, a user must have, at a minimum, both the `clusterMonitor` and `readAnyDatabase` roles. For example:

```javascript
use admin
db.createUser(
   {
      user: "mySourceUser",
      pwd: "mySourceP@$$word",
      roles: [ "clusterMonitor", "readAnyDatabase" ]
   }
)
```

For source clusters running MongoDB version 3.2, a user must have, at a minimum, both the `clusterManager` and `readAnyDatabase` roles, as well as read access on the `local` database. This requires a custom role, which you can create with the following commands:

```javascript
use admin
db.createRole(
   {
      role: "migrate",
      privileges: [
         { resource: { db: "local", collection: "" }, actions: [ "find" ] }
      ],
      roles: [ "readAnyDatabase", "clusterManager" ]
   }
)
db.createUser(
   {
      user: "mySourceUser",
      pwd: "mySourceP@$$word",
      roles: [ "migrate" ]
   }
)
```

For source clusters running MongoDB version 2.6 or 3.0, a user must have, at a minimum, the `readAnyDatabase` role. For example:

```javascript
use admin
db.createUser(
   {
      user: "mySourceUser",
      pwd: "mySourceP@$$word",
      roles: [ "readAnyDatabase" ]
   }
)
```
Specify the user name and password to Atlas when prompted by the Live Migration procedure.
If the source cluster uses a different authentication mechanism, you can use the `mongomirror` tool to migrate data from the source cluster to the destination Atlas cluster. See Migrate with `mongomirror`.
Index Key Limits¶
If your MongoDB deployment contains indexes with keys which exceed the
Index Key Limit, you must
set the MongoDB server parameter failIndexKeyTooLong
to false
before starting the Live Migration procedure.
Modifying indexes so that they contain no oversized keys is
preferable to setting the failIndexKeyTooLong
server
parameter to false
. See the server manual
for strategies on dealing with oversized index keys.
failIndexKeyTooLong
was deprecated in MongoDB version 4.2 and is removed in MongoDB 4.4
and later.
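On deployments where the parameter still exists (MongoDB 4.2 and earlier), one way to set it at runtime is from the `mongo` shell:

```javascript
// Disable the index key length check (MongoDB 4.2 and earlier only)
db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } )
```

Alternatively, you can set the parameter at startup with `mongod --setParameter failIndexKeyTooLong=false`.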
Considerations¶
Destination Cluster Configuration¶
When configuring the destination cluster, consider the following:
The live migration process streams data through a MongoDB-managed application server. Each server runs on infrastructure hosted in the region nearest to the source cluster. The following regions are available:
- Europe
- Frankfurt
- Ireland
- London
- Americas
- Eastern US
- Western US
- APAC
- Mumbai
- Singapore
- Sydney
- Tokyo
- Due to network latency, the live migration process may not be able to keep up with a source cluster that has an extremely heavy write load. In this situation, you can still migrate directly from the source cluster by pointing the `mongomirror` tool at the destination Atlas cluster.
- The live migration process may not be able to keep up with a source cluster whose write workload is greater than what can be transferred and applied to the destination cluster. You may need to scale the destination cluster up to a tier with more processing power, bandwidth, or disk IO.
- The destination Atlas cluster must be a replica set.
- You cannot select an `M0` (Free Tier) or `M2`/`M5` shared-tier cluster as the destination for live migration.
- Do not change the `featureCompatibilityVersion` flag while Atlas Live Migration is running.
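If you fall back to `mongomirror`, the invocation can be sketched as follows. All hostnames, replica set names, and credentials here are placeholders; consult the `mongomirror` documentation for the options that match your deployment:

```shell
# Stream data directly from the source replica set to the Atlas cluster
# (placeholder hosts and credentials).
mongomirror \
  --host "rs0/source.example.net:27017" \
  --destination "myAtlasCluster-shard-0/atlas-host0.example.mongodb.net:27017" \
  --destinationUsername "myAtlasAdminUser" \
  --destinationPassword "myAtlasP@ssword" \
  --ssl
```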
Database Users and Roles¶
Atlas does not migrate any user or role data to the destination cluster.
If the source cluster enforces authentication, you must re-create the credentials used by your applications on the destination Atlas cluster. Atlas uses SCRAM for user authentication. See Configure Database Users for a tutorial on creating database users in Atlas.
Avoid Namespace Changes¶
Do not make any namespace changes during the migration process, such as using the `renameCollection` command or executing an aggregation pipeline that includes the `$out` aggregation stage.
Rolling Restarts¶
After the migration process is complete, your destination replica set will restart each of its members one at a time. This is called a rolling restart, and as a consequence, a failover will occur on the primary. To ensure a smooth migration, it is recommended that you perform a Test Failover procedure prior to migrating your data to the destination cluster.
Migrate Your Cluster¶
Consider performing this procedure twice. Perform a partial migration that stops at the Perform the Cutover step first. This creates an up-to-date Atlas-backed staging cluster to test application behavior and performance using the latest driver version that supports the MongoDB version of the Atlas cluster.
Once you have tested your application, perform the full migration procedure using a separate Atlas cluster to create your Atlas-backed production environment.
Avoid making changes to the source cluster configuration while the Live Migration procedure runs, such as removing replica set members or modifying `mongod` runtime settings like `featureCompatibilityVersion`.
Pre-Migration Checklist¶
Before starting the import process:
- If you don't already have a destination cluster, create a new Atlas deployment and configure it as needed. For complete documentation on creating an Atlas cluster, see Create a New Cluster.

  After your Atlas cluster is deployed, ensure that you can connect to it from all client hardware where your applications run. Testing your connection string helps ensure that your data migration process can complete with minimal downtime.

- Download and install the `mongo` shell on a representative client machine, if you don't already have it.
- Connect to your destination cluster using the connection string from the Atlas UI. For more information, see Connect via `mongo` Shell.
Once you have verified your connectivity to your target cluster, start the import procedure.
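A connection test from the `mongo` shell might look like the following; the SRV connection string and username are placeholders, copied from your own Atlas UI:

```shell
# Verify connectivity to the destination Atlas cluster (placeholder values)
mongo "mongodb+srv://mycluster.example.mongodb.net/test" \
  --username myAtlasUser --password
```

If the shell connects and presents a prompt, your client can reach the destination cluster.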
Procedure¶
Start the migration process.¶
Click the ellipsis ... button for the destination Atlas cluster. On the Cluster list, the ellipsis ... button appears beneath the cluster name. When you view a cluster's details, the ellipsis ... button appears on the right-hand side of the screen, next to the Connect and Configuration buttons.
- Click Migrate Data to this Cluster.
Atlas displays a walk-through screen with instructions on how to proceed with the live migration. The process copies the data from your source cluster to the new target cluster. After you complete the walk-through, you can point your application to the new cluster.
You will need the following details for your source cluster to facilitate the migration:
- The hostname and port of the source cluster's primary member
- The username and password used to connect to the source cluster
- If the source cluster uses TLS/SSL and is not using a public Certificate Authority (CA), the source cluster's CA file
Prepare the information as stated in the walk-through screen, then click I'm Ready To Migrate.
Atlas displays a walk-through screen that collects information required to connect to the source cluster.
- Atlas displays the IP address of the MongoDB application server responsible for your live migration at the top of the walk-through screen. Configure your source cluster firewall to grant access to the displayed IP address.
- Enter the hostname and port of the primary member of the source cluster into the provided text box. For example, `mongoPrimary.example.net:27017`.
- If the source cluster enforces authentication, enter a username and password into the provided text boxes. See Source Cluster Security for guidance on the user permissions required by Atlas live migration.
- If the source cluster uses TLS/SSL, toggle the SSL button.
- If the source replica set uses TLS/SSL and is not using a public Certificate Authority (CA), copy the contents of the source cluster's CA file into the provided text box.
- If you wish to drop all collections on the target cluster before beginning the migration process, toggle the switch marked Clear any existing data on your target cluster? to Yes.
Click Validate to confirm that Atlas can connect to the source replica set.
If validation fails, check that:
- You have added Atlas to the IP access list on your source cluster.
- The provided user credentials, if any, exist on the source cluster and have the required permissions.
- The SSL toggle is enabled only if the source cluster requires it.
- The CA file provided, if any, is valid and correct.
Click Start Migration to start the migration process.
Once the migration process begins, the Atlas UI displays the Migrating Data walk-through screen for the destination Atlas cluster.
The walk-through screen updates as the destination cluster proceeds through the migration process. The migration process includes:
- Copying collections from source to destination.
- Creating indexes on the destination.
- Tailing of oplog entries from the source cluster.
A lag time value is displayed during the final oplog tailing phase that represents the current lag between the source and destination clusters. This lag time may fluctuate depending on the rate of oplog generation on the source, but should decrease over time as oplog entries are copied to the destination.
When the lag timer and the Prepare to Cutover button turn green, proceed to the next step.
Perform the cutover.¶
When Atlas detects that the source and destination clusters are nearly in sync, it starts an extendable 72-hour timer to begin the cutover procedure. If the 72-hour period passes, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking the Extend time hyperlink below the <time> left to cut over timer.
- Once you are prepared to cut your applications over to the destination Atlas cluster, click Prepare to Cutover.
Atlas displays a walk-through screen with instructions on how to proceed with the cutover. These steps are also outlined below:
- Stop your application. This ensures that no additional writes are generated to the source cluster.
- Wait for the optime gap to reach zero. When the counter reaches zero, the source and destination clusters are in sync.
- Restart your application using the new connection string provided in step 3 of the Live Migrate cutover UI.
Once you have completed the cutover procedure and confirmed your applications are working normally with the Atlas cluster, click Cut Over to complete the migration procedure. This allows Atlas to:
- Mark the migration plan as complete.
- Remove the Application Server subnets from the destination cluster IP access list.
- Remove the database user that Live Migrate used to import data to the destination cluster.
Migration Support¶
If you have any questions regarding migration support beyond what is covered in this documentation, or if you encounter an error during migration, please request support through the Atlas UI.
To file a support ticket:
- Click Support in the left-hand navigation.
- Click Request Support.
- For Issue Category, select Help with live migration.
- For Priority, select the appropriate priority. For questions, select Medium Priority. If there was a failure in migration, select High Priority.
- For Request Summary, include Live Migration in your summary.
- For More details, include any other relevant details about your question or migration error.
- Click the Request Support button to submit the form.