High Availability and Workload Isolation Options

Clusters Can Be Created Using Multiple Cloud Providers

Atlas allows you to create multi-cloud clusters using any combination of cloud providers: AWS, Azure, and GCP.

You can set the nodes in your cluster to use different:

  • cloud providers
  • geographies
  • workload priorities
  • replication configurations

How you apply these options improves the availability and workload balance of your cluster.

To configure node-specific options for your cluster, toggle Multi-Cloud, Multi-Region & Workload Isolation (M10+ clusters) to On.

Multi-Cloud Provider, Multi-Region & Workload Isolation feature

Multi-Region and Multi-Cloud Clusters

The introduction of multi-cloud capabilities in Atlas changes how Atlas defines geographies for a cluster:

multi-region cluster

A cluster that may be hosted in:

  • multiple regions within a single cloud provider, or
  • multiple regions across multiple cloud providers.

As each cloud provider has its own set of regions, multi-cloud clusters are also multi-region clusters.

Electable Nodes for High Availability

If you add regions with electable nodes, you:

  • increase data availability and
  • reduce the impact of data center outages.

You can select multiple regions within one cloud provider, span multiple cloud providers, or both.

Atlas sets the region in the first row of the Electable nodes table as the Highest Priority region.

Atlas prioritizes nodes in this region for primary eligibility. Other nodes rank in the order that they appear.

See also

Member Priority.
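Conceptually, the row order of the Electable nodes table maps to descending replica set member priorities, so nodes in the first row are favored in elections. A minimal sketch of that mapping (the priority values here are illustrative, not the exact values Atlas assigns):

```python
def assign_member_priorities(regions):
    """Map the row order of electable regions to descending member
    priorities, so the first row is favored as primary.

    `regions` is an ordered list of (region_name, node_count) pairs
    mirroring the rows of the Electable nodes table. Illustrative only.
    """
    priorities = {}
    for rank, (region, _nodes) in enumerate(regions):
        # A higher priority value makes a node more likely to be elected.
        priorities[region] = len(regions) - rank
    return priorities

rows = [("AWS us-east-1", 3), ("GCP us-west3", 2)]
print(assign_member_priorities(rows))
# {'AWS us-east-1': 2, 'GCP us-west3': 1}
```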

Each electable node can:

  • Participate in replica set elections.
  • Become the primary, as long as a majority of the replica set's nodes remain available.
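Electing a primary requires a strict voting majority, which is why odd totals of electable nodes (3, 5, 7) tolerate the loss of 1, 2, or 3 nodes respectively. A quick sketch of the arithmetic:

```python
def majority(voting_members: int) -> int:
    """Votes needed to elect a primary: a strict majority."""
    return voting_members // 2 + 1

def tolerated_failures(voting_members: int) -> int:
    """Nodes that can be lost while a primary can still be elected."""
    return voting_members - majority(voting_members)

for n in (3, 5, 7):
    print(n, majority(n), tolerated_failures(n))
# 3 2 1
# 5 3 2
# 7 4 3
```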

Add Electable Nodes

You can add electable nodes in one cloud provider and region from the Electable nodes for high availability section.

To add an electable node:

  1. Click Add a provider/region.

  2. Select the cloud provider from the Provider dropdown.

  3. Select the region from the Region dropdown.

    When you change the Provider option, the Region changes to a blank option. If you don’t select a region, Atlas displays an error when you click Create Cluster.

  4. Specify the desired number of Nodes for the provider and region.

    The total number of electable nodes across all providers and regions in the cluster must equal 3, 5, or 7.
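The per-row counts can vary, but the totals must land on one of the allowed values. A hypothetical helper mirroring that rule:

```python
# Allowed totals for electable nodes across all providers and regions.
VALID_ELECTABLE_TOTALS = {3, 5, 7}

def validate_electable(node_counts):
    """node_counts: list of electable node counts, one entry per
    provider/region row. Returns True if the total is allowed."""
    return sum(node_counts) in VALID_ELECTABLE_TOTALS

print(validate_electable([3]))     # True
print(validate_electable([3, 2]))  # True  (3 + 2 = 5)
print(validate_electable([2, 2]))  # False (4 is not an allowed total)
```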

Atlas considers regions marked with a star icon as recommended. These regions provide higher availability compared to other regions.

Remove Electable Nodes

To remove a region, click the trash icon to the right side of that region. You cannot remove the Highest Priority region.

Improve the Availability of a Cluster

To improve the redundancy and availability of a cluster, increase the number of electable nodes and distribute them across regions. Every Atlas cluster has a Highest Priority region. If your cluster spans multiple regions, you can select which cloud provider region should be the Highest Priority.

Consider the following scenarios and how to prevent loss of availability and performance:

Point of Failure  How to Prevent this Point of Failure
Cloud Provider    A minimum of one node in each of the three cloud providers, with more than one node per region.
Region            A minimum of one node in each of three or more regions, with more than one node per region.
Node              Three or more electable nodes in a Recommended region, or three or more electable nodes across two or more regions.
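The table's guidelines can be expressed as a rough topology check. This is a simplified sketch of those rules, not an Atlas API:

```python
def single_points_of_failure(topology):
    """topology: list of (provider, region) pairs, one entry per
    electable node. Returns which failure domains from the table
    above remain single points of failure. Simplified sketch."""
    providers = {p for p, _ in topology}
    regions = {r for _, r in topology}
    risks = []
    if len(providers) < 3:
        risks.append("cloud provider")
    if len(regions) < 3:
        risks.append("region")
    if len(topology) < 3:
        risks.append("node")
    return risks

# Three nodes spread across three providers and regions: no risk flagged.
spread = [("AWS", "us-east-1"), ("GCP", "us-west3"), ("AZURE", "westus")]
print(single_points_of_failure(spread))  # []

# Three nodes with one provider in one region.
concentrated = [("AWS", "us-east-1")] * 3
print(single_points_of_failure(concentrated))  # ['cloud provider', 'region']
```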

Change the Highest Priority Provider/Region

If you change the Highest Priority provider and region in an active multi-region cluster, Atlas performs a rolling restart on all nodes in that cluster. This change triggers an election that selects a new primary node in the specified provider and region (assuming that the number of nodes in each provider and region remains the same and nothing else is modified).

Example

If you have an active 5-node cluster with the following configuration:

Nodes Provider Region Priority
3 AWS us-east-1 Highest
2 GCP us-west3  

To make the GCP us-west3 nodes the Highest Priority, drag its row to the first row of your cluster’s Electable nodes. After this change, Atlas performs a rolling restart on all nodes and elects a new PRIMARY in us-west3. Atlas doesn’t start an initial sync or re-provision hosts when changing this configuration.

Important

Certain circumstances may delay an election of a new primary.

Example

A sharded cluster with heavy workloads on its primary shard may delay the election. As a result, the cluster's shards may temporarily have their primary nodes in different regions.

To minimize these risks, avoid modifying your primary region during periods of heavy workload.

Read-Only Nodes for Optimal Local Reads

Use read-only nodes to optimize local reads in the nodes’ respective service areas.

Add Read-Only Nodes

You can add read-only nodes from the Read-Only Nodes for Optimal Local Reads section.

To add a read-only node in one cloud provider and region:

  1. Click Add a provider/region.

  2. Select the cloud provider from the Provider dropdown.

  3. Select the region from the Region dropdown.

    When you change the Provider option, the Region changes to a blank option. If you don’t select a region, Atlas displays an error when you click Create Cluster.

  4. Specify the desired number of Nodes for the provider and region.

Atlas considers regions marked with a star icon as recommended. These regions provide higher availability compared to other regions.

Read-only nodes don’t provide high availability because they don’t participate in elections. They can’t become the primary for their cluster. Read-only nodes have distinct read preference tags that allow you to direct queries to desired regions.
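A driver can target these tags through tag sets in the read preference. The sketch below builds such a connection string with only the standard library; the nodeType:READ_ONLY tag and the cluster host are assumptions here, so verify the tag names on your own cluster before relying on them:

```python
from urllib.parse import urlencode

def read_only_uri(base_uri: str) -> str:
    """Append read preference options that target read-only nodes.
    Assumes a pre-defined nodeType:READ_ONLY replica set tag; verify
    the tag names on your own cluster before relying on them."""
    options = {
        "readPreference": "secondary",
        "readPreferenceTags": "nodeType:READ_ONLY",
    }
    # Keep the colon in the tag literal unescaped for readability.
    sep = "&" if "?" in base_uri else "?"
    return base_uri + sep + urlencode(options, safe=":")

# Hypothetical cluster host, for illustration only.
uri = read_only_uri("mongodb+srv://cluster0.example.mongodb.net/test")
print(uri)
```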

Remove Read-Only Nodes

To remove all read-only nodes in one cloud provider and region, click the trash icon to the right of that cloud provider and region.

Analytics Nodes for Workload Isolation

Use analytics nodes to isolate queries that you do not want to contend with your operational workload. Analytics nodes help handle data analysis operations, such as reporting queries from the BI Connector for Atlas. Analytics nodes have distinct replica set tags that allow you to direct queries to desired regions.
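Drivers resolve such tags with tag-set fallback semantics: each tag set is tried in order, and the first one that matches any node wins. A simplified, stdlib-only sketch of that behavior (the host names and the nodeType tag values are hypothetical; check the tags on your own cluster):

```python
def select_nodes(nodes, tag_sets):
    """Return the first non-empty group of hosts matching a tag set,
    mimicking driver tag-set fallback semantics (simplified).
    `nodes` maps host -> tags dict; `tag_sets` is tried in order."""
    for tags in tag_sets:
        matched = [host for host, node_tags in nodes.items()
                   if all(node_tags.get(k) == v for k, v in tags.items())]
        if matched:
            return matched
    return []

# Hypothetical hosts; the nodeType values mirror Atlas's pre-defined
# replica set tags (verify on your cluster).
nodes = {
    "host-00": {"nodeType": "ELECTABLE"},
    "host-03": {"nodeType": "ANALYTICS"},
}
# Prefer analytics nodes; fall back to any node ({} matches all).
print(select_nodes(nodes, [{"nodeType": "ANALYTICS"}, {}]))
# ['host-03']
```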

Add Analytics Nodes

You can add analytics nodes from the Analytics nodes for workload isolation section.

To add analytics nodes in one cloud provider and region:

  1. Click Add a provider/region.

  2. Select the cloud provider from the Provider dropdown.

  3. Select the region from the Region dropdown.

    When you change the Provider option, the Region changes to a blank option. If you don’t select a region, Atlas displays an error when you click Create Cluster.

  4. Specify the desired number of Nodes for the provider and region.

Atlas considers regions marked with a star icon as recommended. These regions provide higher availability compared to other regions.

Analytics nodes don’t provide high availability because they don’t participate in elections. They can’t become the primary for their cluster.

Remove Analytics Nodes

To remove all analytics nodes in one cloud provider and region, click the trash icon to the right of that cloud provider and region.

Considerations

  • Atlas does not guarantee that host names remain consistent with respect to node types during topology changes.

    Example

    If you have a cluster named foo123 containing an analytics node foo123-shard-00-03-a1b2c.mongodb.net:27017, Atlas does not guarantee that specific host name will continue to refer to an analytics node after a topology change, such as scaling a cluster to modify its number of nodes or regions.

  • If you use the standard connection string format rather than the DNS seedlist format, removing an entire region from an existing cross-region cluster may result in a new connection string. To verify the correct connection string after deploying the changes, click Connect from the Clusters view.

  • Having a large number of regions or having nodes spread across long distances may lead to long election times or replication lag.

  • For a given region in an Atlas project with multi-region clusters or clusters in multiple regions, there is a limit of 40 MongoDB nodes across all other regions in that project. This limit applies across all cloud service providers and can be raised upon request. GCP regions communicating with each other do not count against this limit.

    Example

    If an Atlas project has:

    • 30 nodes in Region A
    • 10 nodes in Region B
    • 5 nodes in Region C

    You can no longer add nodes to your project in Region A or Region B, because the nodes in those regions add up to 40, which is the maximum allowed per project. You can add up to 5 nodes in Region C while still satisfying the project limit.

    This limit applies even if Regions A, B, and C are backed by different cloud service providers.

    For Atlas projects in which every cluster is deployed to a single region, you cannot create a multi-region cluster in that project if that region already contains 40 or more nodes, unless you request that the limit be raised.

    Please contact Atlas support for questions or assistance with raising this limit.

  • If you plan on creating one or more VPC peering connections on your first M10+ dedicated paid cluster for the selected region or regions, first review the documentation on VPC Peering Connections.

  • Atlas provides built-in custom write concerns for multi-region clusters. Use these write concerns to ensure your write operations propagate to a desired number of regions, thereby ensuring data consistency across your regions.

  • The number of availability zones, zones, or fault domains in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.
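The built-in custom write concerns noted above can be requested through the connection string's w option. A stdlib-only sketch of appending one (the write concern name twoRegions and the cluster host are assumptions; confirm the names available on your cluster):

```python
from urllib.parse import urlencode

def with_write_concern(base_uri: str, w: str) -> str:
    """Append a write concern to a connection string. A built-in
    custom write concern name such as 'twoRegions' is assumed here;
    confirm the available names for your cluster."""
    sep = "&" if "?" in base_uri else "?"
    return base_uri + sep + urlencode({"w": w})

# Hypothetical cluster host, for illustration only.
uri = with_write_concern(
    "mongodb+srv://cluster0.example.mongodb.net/test?retryWrites=true",
    "twoRegions",
)
print(uri)
```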