Multi-Region, Workload Isolation, and Replication Options

Atlas supports adding cluster nodes in different geographic regions with different workload priorities to direct application queries to the most appropriate cluster nodes.

To configure multi-region and workload isolation cluster options, toggle Select Multi-Region, Workload Isolation, and Replication Options (M10+ clusters) to Yes.

Image showing workload isolation settings

The number of availability zones (AWS), zones (GCP), or fault domains (Azure) in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made up of replica sets with a minimum of three MongoDB nodes.

AWS Only

If this is the first M10+ dedicated paid cluster for the selected region or regions and you plan on creating one or more VPC peering connections, please review the documentation on VPC Peering Connections before continuing.

Electable Nodes for High Availability

Having additional regions with electable nodes increases availability and helps better withstand data center outages.

The first row in the Electable nodes section lists the Highest Priority region. Atlas prioritizes nodes in this region for primary eligibility. For more information on priority in replica set elections, see Member Priority.

Each electable node can:

  • Participate in replica set elections.
  • Become the primary as long as the majority of nodes in the replica set are available.
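The majority requirement above can be sketched as a quick calculation (illustrative only, not part of any Atlas tooling):

```python
def can_elect_primary(available: int, total_voting: int) -> bool:
    """A replica set can elect (or keep) a primary only while a strict
    majority of its voting nodes remains available."""
    majority = total_voting // 2 + 1
    return available >= majority

# A 5-node replica set tolerates losing two nodes but not three.
print(can_elect_primary(3, 5))  # True
print(can_elect_primary(2, 5))  # False
```

This is why electable node totals are always odd: an even total raises the majority threshold without improving fault tolerance.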

Configure Regions for Electable Nodes

Click Add a region to add a new row for region selection and select the region from the dropdown. Specify the desired number of Nodes for the region. The total number of electable nodes across all regions in the cluster must be 3, 5, or 7.
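That constraint can be expressed as a minimal sketch (the function name and input shape are illustrative, not an Atlas API):

```python
def validate_electable_total(nodes_per_region: dict) -> int:
    """Atlas requires electable nodes across all regions to total 3, 5, or 7."""
    total = sum(nodes_per_region.values())
    if total not in (3, 5, 7):
        raise ValueError(f"electable nodes must total 3, 5, or 7, not {total}")
    return total

print(validate_electable_total({"us-east-1": 3, "us-west-2": 2}))  # 5
```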

When selecting a Region, regions marked as Recommended provide higher availability compared to other regions.

To remove a region, click the trash icon next to that region. You cannot remove the Highest Priority region.

Backup Data Center Location

If this is the first cluster in the project and you intend to enable continuous snapshot backups, Atlas selects the backup data center location for the project based on the geographical location of the cluster’s Highest Priority region. To learn more about how Atlas creates the backup data center, see Fully Managed Backup Service.

Improve the Availability of a Cluster

You can improve the redundancy and availability of a single region by increasing the number of Nodes in that region. In a single-region cluster, that region is your Highest Priority region. If you choose to add another region, you can reconfigure the cluster to make the new region your Highest Priority region.

To ensure availability during a full region outage, you need at least one node in each of three different regions. To ensure availability during a partial region outage, you must have at least three electable nodes in a Recommended region or at least three electable nodes across at least two regions.
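The full-region-outage condition reduces to the majority rule: after losing all nodes in any one region, the surviving electable nodes must still form a strict majority. A sketch of that check (illustrative only):

```python
def survives_any_full_region_outage(nodes_per_region: dict) -> bool:
    """True if losing every node in any single region still leaves a
    strict majority of electable nodes able to elect a primary."""
    total = sum(nodes_per_region.values())
    majority = total // 2 + 1
    return all(total - n >= majority for n in nodes_per_region.values())

print(survives_any_full_region_outage({"a": 1, "b": 1, "c": 1}))  # True
print(survives_any_full_region_outage({"a": 3, "b": 2}))          # False
```

The second case fails because losing the three-node region leaves only two of five nodes, short of the three-node majority.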

Modify the Highest Priority Region in an Active Multi-Region Cluster

If you change the Highest Priority region in an active multi-region cluster, Atlas performs a rolling restart on all nodes in that cluster. This change triggers an election which selects a new PRIMARY in the region specified (assuming that the number of nodes in each region remains the same and nothing else is modified).

Example

If you have an active 5-node cluster with the following configuration:

  • 3 nodes in us-east-1 (the Highest Priority region, housing the primary node)
  • 2 nodes in us-west-2

You can choose to make us-west-2 your Highest Priority region by setting it as the first row of your cluster’s Electable nodes. After the change, Atlas performs a rolling restart on all nodes and elects a new PRIMARY in us-west-2. There is no initial sync or re-provisioning of servers required to make this configuration change.
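Conceptually, the region order in the Electable nodes list maps to descending replica set member priorities, with the first row's members favored in elections. A simplified sketch (the actual priority values Atlas assigns are internal):

```python
def region_priorities(regions_in_order: list) -> dict:
    """Map each region to a relative priority value, highest first,
    mirroring how the first (Highest Priority) row wins elections."""
    n = len(regions_in_order)
    return {region: n - i for i, region in enumerate(regions_in_order)}

# Before the change, then after moving us-west-2 to the first row:
print(region_priorities(["us-east-1", "us-west-2"]))  # {'us-east-1': 2, 'us-west-2': 1}
print(region_priorities(["us-west-2", "us-east-1"]))  # {'us-west-2': 2, 'us-east-1': 1}
```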

Read-Only Nodes for Optimal Local Reads

Use read-only nodes to optimize local reads in the nodes’ respective service areas.

Click Add a region to select a region in which to deploy read-only nodes. Specify the desired number of Nodes for the region.

Read-only nodes cannot provide high availability because they cannot participate in elections or become the primary for their cluster. Read-only nodes have distinct read preference tags that allow you to direct queries to desired regions.

To remove a read-only region, click the trash icon next to that region.
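As an illustration of how tag-based routing works (hypothetical host names; Atlas documents a pre-configured nodeType replica set tag, but verify the exact tags on your own cluster):

```python
def nodes_matching_tags(nodes: list, required: dict) -> list:
    """Filter nodes whose tags contain every required key/value pair,
    mimicking how a driver's read preference tag sets select candidates."""
    return [n for n in nodes
            if all(n["tags"].get(k) == v for k, v in required.items())]

# Hypothetical topology: two electable nodes and one read-only node.
cluster = [
    {"host": "foo123-shard-00-00.example.net", "tags": {"nodeType": "ELECTABLE"}},
    {"host": "foo123-shard-00-01.example.net", "tags": {"nodeType": "ELECTABLE"}},
    {"host": "foo123-shard-00-02.example.net", "tags": {"nodeType": "READ_ONLY"}},
]
read_only = nodes_matching_tags(cluster, {"nodeType": "READ_ONLY"})
print([n["host"] for n in read_only])  # ['foo123-shard-00-02.example.net']
```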

Analytics Nodes for Workload Isolation

Use analytics nodes to isolate queries that you do not want contending with your operational workload. Analytics nodes are useful for handling data analysis operations, such as reporting queries from BI Connector for Atlas. Analytics nodes have distinct replica set tags that allow you to direct queries to desired regions.

Click Add a region to select a region in which to deploy analytics nodes. Specify the desired number of Nodes for the region.

Analytics nodes cannot participate in elections or become the primary for their cluster.

To remove an analytics node, click the trash icon next to that region.
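For example, reads can be routed to analytics nodes by adding read preference options to the connection string (plain string construction shown; nodeType:ANALYTICS is the tag Atlas is documented to apply to analytics nodes, but confirm it against your own cluster):

```python
def with_analytics_read_preference(uri: str) -> str:
    """Append options that direct secondary reads to analytics nodes."""
    opts = "readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS"
    sep = "&" if "?" in uri else "?"
    return uri + sep + opts

print(with_analytics_read_preference("mongodb+srv://cluster0.example.mongodb.net/test"))
# mongodb+srv://cluster0.example.mongodb.net/test?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS
```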

Considerations

  • Atlas does not guarantee that host names remain consistent with respect to node types during topology changes.

    Example

    If you have a cluster named foo123 containing an analytics node foo123-shard-00-03-a1b2c.mongodb.net:27017, Atlas does not guarantee that specific host name will continue to refer to an analytics node after a topology change, such as scaling a cluster to modify its number of nodes or regions.

  • If you are using the standard connection string format rather than the DNS seedlist format, removing an entire region from an existing cross-region cluster may result in a new connection string. After deploying the changes, verify the correct connection string by clicking Connect from the Clusters view.

  • Having a large number of regions or having nodes spread across long distances may lead to long election times or replication lag.

  • For a given region in an Atlas project with multi-region clusters or clusters in multiple regions, there is a limit of 40 MongoDB nodes across all other regions in that project. This limit applies across all cloud service providers and can be raised upon request. GCP regions communicating with each other do not count against this limit.

    Example

    If an Atlas project has 20 nodes in Region A and 20 nodes in Region B, you can deploy no more than 20 additional nodes in that project in any given region. This limit applies even if Region A and Region B are backed by different cloud service providers.

    For Atlas projects where every cluster is deployed to a single region, you cannot create a multi-region cluster in that project if that single region already contains 40 or more nodes, unless you request that the limit be raised.

    Please contact Atlas support for questions or assistance raising this limit. To contact support, click Support from the left-hand navigation bar of the Atlas UI.