Global Clusters

Atlas Global Clusters use a highly curated implementation of sharded cluster zones to support location-aware read and write operations for globally distributed application instances and clients. Global Clusters support deployment patterns such as:

  • Low-latency read and write operations for globally distributed clients.
  • Uptime protection during partial or full regional outages.
  • Location-aware data storage in specific geographic regions.
  • Workload isolation based on cluster member types.

Atlas supports enabling Global Writes when deploying a sharded cluster of instance size M30 or greater. For an existing replica set, scale the cluster to at least an M30 instance size, then enable Global Writes. All shard nodes deploy with the selected instance size.


You cannot disable Global Writes for a cluster once deployed.

Screenshot of the Atlas Global Writes dialog

Atlas Global Clusters require developers to define single or multi-region Zones, where each zone supports write and read operations from geographically local shards. You can also configure zones to support global low-latency secondary reads. For more information on Global Writes zones, see Global Write Zones and Zone Mapping.

Atlas does not auto-configure or auto-shard collections. Sharded collections must meet specific compatibility requirements to utilize Global Writes. For more information on guidance and requirements for sharding collections for Global Writes, see Sharding Collections for Global Writes. The Atlas Data Explorer supports creating sharded collections with specific validations for Global Writes. For complete documentation, see Shard a Collection for Global Writes in Data Explorer.

Open Ports 27015 to 27017 to Access Atlas Databases

If you use a whitelist on your firewall for network ports, open ports 27015 to 27017 to TCP traffic on Atlas hosts. This grants your applications access to databases stored on Atlas.

To configure your application-side networks to accept Atlas traffic, we recommend using the Atlas API Get All Clusters endpoint to retrieve the mongoURI from the response elements. You can also use the Get All MongoDB Processes endpoint to retrieve cluster hostnames.

You can parse these hostname values and feed the IP addresses programmatically into your application-tier orchestration automation to push firewall updates.
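As a minimal sketch of that parsing step, the helper below splits the host list out of a mongoURI connection string. The URI shown is an illustrative placeholder, not a real cluster address:

```javascript
// Sketch: extract host:port pairs from an Atlas mongoURI for firewall automation.
// The URI below is a hypothetical example, not a real cluster address.
function extractHosts(mongoURI) {
  // Strip the scheme, then any credentials, then the database/options suffix.
  const withoutScheme = mongoURI.replace(/^mongodb(\+srv)?:\/\//, "");
  const withoutAuth = withoutScheme.replace(/^[^@]*@/, "");
  const hostList = withoutAuth.split("/")[0];
  // Replica set URIs list members as a comma-separated host:port sequence.
  return hostList.split(",").map((hostPort) => {
    const [host, port] = hostPort.split(":");
    return { host, port: port ? Number(port) : 27017 };
  });
}

const uri =
  "mongodb://cluster0-shard-00-00.example.net:27016," +
  "cluster0-shard-00-01.example.net:27016/admin?ssl=true";
console.log(extractHosts(uri)); // one { host, port } entry per cluster member
```

You could then resolve each hostname to its current IP addresses and feed those into your firewall tooling.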

Global Write Zones and Zone Mapping

Each Atlas Global Cluster supports up to 9 distinct zones. Each zone consists of one Highest Priority region and one or more Electable, Read-only, or Analytics regions. The available regions depend on the selected cloud service provider.

Highest Priority

Region where Atlas deploys the primary replica set member for the shard or shards associated with that zone. Clients can only issue write operations to the primary.

Atlas uses the geographic location of the Highest Priority regions to construct a map of geographically near countries and subdivisions. The Global Writes cluster uses this map to direct write operations to the correct zone.


To facilitate low-latency local secondary reads of globally distributed data, for each zone in the cluster add a Read-only node in the Highest Priority region of every other zone.

Electable

Region where Atlas deploys electable secondary replica set members for the shard or shards associated with that zone. Electable regions add fault tolerance in the event of a partial or total regional outage in the Highest Priority region.

Read-only

Region where Atlas deploys non-electable secondary replica set members that support secondary read operations.

Analytics

Region where Atlas deploys analytics nodes. Analytics nodes are read-only nodes configured with distinct replica set tags. You can use these tags to direct queries to specific regions. As a result, analytics nodes can help isolate reporting queries from your operational workload and reduce latency for local reads.

For each shard associated with a zone, Atlas distributes the shard nodes across the configured regions. While Atlas allows more than one shard per zone, consider creating additional zones instead to address high user volume in a concentrated geographic area.


Atlas supports up to 50 shards per sharded cluster regardless of the number of zones. Contact support by clicking Support from the Atlas UI if you have requirements for more shards in your Global Cluster.

The Atlas cluster builder includes templates for automatically configuring Global Writes zones for the Global Cluster. Each template provides a visual description of the cluster zone configuration, including estimates of geographic latency and coverage. For complete documentation on creating a Global Cluster, see Create a Global Cluster. For more information on Global Writes templates, see the Configure Global Writes Zones Using a Template section of that tutorial.

Sharding Collections for Global Writes

Unsharded collections must meet the following compatibility requirements to utilize Global Writes when sharded:

  • Every document in the collection must include a location field.
  • The value of the location field must be either an ISO 3166-1 alpha-2 country code ("US", "DE", "IN") or a supported ISO 3166-2 subdivision code ("US-DC", "DE-BE", "IN-DL"). Documents that do not meet these criteria can be routed to any shard in the cluster. The complete list of currently supported country and subdivision codes is available in the Atlas documentation.
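One way to enforce these requirements is to pre-validate documents before inserting them. The sketch below checks the location field against a supported-code set; the set shown is a small illustrative subset, not the full list Atlas maintains:

```javascript
// Sketch: pre-validate the location field before writing to a Global Writes
// collection. SUPPORTED_CODES is an illustrative subset of the full list of
// ISO 3166-1 alpha-2 and ISO 3166-2 codes that Atlas supports.
const SUPPORTED_CODES = new Set(["US", "DE", "IN", "US-DC", "DE-BE", "IN-DL"]);

function hasRoutableLocation(doc) {
  return typeof doc.location === "string" && SUPPORTED_CODES.has(doc.location);
}

console.log(hasRoutableLocation({ location: "DE", name: "Anke" }));     // true
console.log(hasRoutableLocation({ location: "Germany", name: "Jan" })); // false: not an ISO code
```

Documents that fail this check would still insert successfully, but MongoDB may route them to any shard rather than a geographically appropriate one.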

For collections that meet the stated requirements, you must shard the collection using the following pattern:

{ "location" : 1, "<secondary_field>" : 1 }

A shard key on the location field alone may result in bottlenecks, especially for workloads where a subset of countries or subdivisions receive the majority of write operations. Atlas Global Writes requires a compound shard key to facilitate efficient distribution of sharded data across the cluster. Atlas Global Cluster shard keys share the same restrictions as MongoDB shard keys; for example, the secondary shard key field cannot be an array.

For guidance on choosing a secondary shard key field and the effect of shard key choice on data distribution, see Choosing a Shard Key. For complete documentation on shard key limitations, see Shard Key Limitations.


You cannot easily change the shard key after sharding, nor can you modify the value of shard key fields in any document in the sharded collection.

The Atlas Data Explorer supports creating sharded collections with specific validations for Global Writes. For complete documentation, see Shard a Collection for Global Writes in Data Explorer.

You can also use the mongo shell to execute sh.shardCollection(). After sharding the collection, you must use the Atlas Data Explorer to enable Global Writes for that collection. For complete documentation on sharding collections via the Data Explorer, see Shard a Collection for Global Writes in Data Explorer.
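As a sketch of the mongo shell approach, the command below shards a hypothetical `mydb.customers` collection using the required compound key pattern, with `userId` standing in for the secondary shard key field. It must be run against a deployed cluster; all names are illustrative:

```javascript
// mongo shell sketch -- run against a deployed Global Cluster.
// "mydb.customers" and "userId" are hypothetical names for illustration.
sh.shardCollection(
  "mydb.customers",                 // <database>.<collection>
  { "location": 1, "userId": 1 }    // compound key: location first, then the secondary field
)
```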

Global Cluster Write Operations

Each write to a sharded collection must include the shard key for the operation to succeed. For each document in a write operation, MongoDB uses the location field of the shard key to determine the zone to which to route the data. MongoDB selects a shard associated with that zone as the target for writing the document, facilitating geographically isolated and segmented data storage.

MongoDB can only guarantee this behavior for inserted documents that meet the criteria defined in Sharding Collections for Global Writes. Specifically, MongoDB can route a document whose location field does not conform to ISO 3166-1 alpha-2 or ISO 3166-2 to any shard in the cluster.
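For illustration, a write carrying a routable location code might look like the following mongo shell fragment, assuming the hypothetical `customers` collection sharded on `{ location: 1, userId: 1 }`:

```javascript
// mongo shell sketch -- requires a connected Global Cluster.
// The document includes the full shard key, so MongoDB routes it to the
// zone mapped to "DE" (Germany). Collection and field names are hypothetical.
db.customers.insertOne({
  "location": "DE",     // ISO 3166-1 alpha-2 country code
  "userId": "u1024",    // hypothetical secondary shard key field
  "name": "Anke"
})
```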

Global Cluster Read Operations

MongoDB query routing depends on whether the read operation includes the full shard key and whether the location value corresponds to a supported ISO 3166-1 alpha-2 country code ("US", "DE", "IN") or a supported ISO 3166-2 subdivision code ("US-DC", "DE-BE", "IN-DL").

For queries that do include the full shard key and whose location value meets the requirements for Global Writes, MongoDB targets the read operation to the zone or zones that map to the location value or values specified in the query.

For read operations that do not include the full shard key, or whose location value does not correspond to a supported ISO 3166-1 alpha-2 country code or ISO 3166-2 subdivision code, MongoDB must broadcast the read operation to every zone in the cluster.

For Global Writes zones that have Read-only nodes in geographically distant regions, clients in those regions can query the local Read-only node for that zone by specifying the full shard key in the query and issuing the read operation with a read preference of nearest.
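A sketch of such a zone-local read, again assuming the hypothetical `customers` collection sharded on `{ location: 1, userId: 1 }`:

```javascript
// mongo shell sketch -- requires a connected Global Cluster.
// The filter covers the full compound shard key, so mongos targets the zone
// mapped to "DE"; readPref("nearest") lets the query be served by the
// geographically closest member, including a zone-local Read-only node.
db.customers.find(
  { "location": "DE", "userId": "u1024" }
).readPref("nearest")
```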


Secondary reads may return stale data depending on the level of replication lag between the secondary node and the primary. For complete documentation on MongoDB read preference, see Read Preference.

See also

For more information on MongoDB query routing, see mongos.

Sharding Collections without Global Writes

Global Writes clusters support the same Ranged and Hashed sharding strategies as a standard Atlas sharded cluster. For sharded collections whose shard keys and document schema do not support Global Writes, MongoDB distributes the sharded data evenly across the available shards in the cluster with respect to the chosen shard key. Consider using a separate sharded cluster for data that cannot take advantage of Global Writes functionality.

You cannot modify a collection to support Global Writes after sharding. Consider whether you might want to use Global Writes for a collection in the future before choosing an incompatible shard key. For more information on Global Writes sharding requirements, see Sharding Collections for Global Writes.

Unsharded Collections in Global Write Clusters

Global Clusters provide the same support for unsharded collections as a standard Atlas sharded cluster. For each database in the cluster, MongoDB stores its unsharded collections on a primary shard. Use sh.status() from the mongo shell to determine the primary shard for the database.
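As an alternative to scanning the full sh.status() output, you can read the primary shard for a database directly from the cluster's config database. A sketch, using a hypothetical database name:

```javascript
// mongo shell sketch -- requires a connection to the cluster through mongos.
// The "primary" field of the config.databases entry names the primary shard
// for the "mydb" database ("mydb" is a hypothetical name).
db.getSiblingDB("config").databases.findOne({ "_id": "mydb" }).primary
```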