FAQ: Networking

Note

This section applies to M10 or larger clusters only unless specified.

An Atlas cluster's public IPs don't change when you:

  • Scale the cluster vertically.
  • Unpause the cluster.
  • Change the cluster's topology.
  • Terminate and then re-deploy a cluster that had a lifetime of 12 hours or more, within 12-36 hours of termination. To learn more, see Terminate One Cluster.
  • Experience a maintenance event on your cluster.
  • Experience a healing event on your cluster.

An Atlas cluster's public IP addresses must change when you:

To find the public IP address for any node in your cluster, use the nslookup tool from the command line. The IP address is shown in the Address portion of the output.

$ nslookup ds-shard-00-00-17jcm.mongodb-dev.net
Address: 34.226.104.79

No. An Atlas project, and its clusters, are associated with a region-specific VPC.

Atlas creates a VPC when you deploy the first M10+ dedicated paid cluster to a given provider and region. For multi-region clusters, Atlas creates one VPC per region if there is not already a VPC for that region.

(AWS deployments only) Atlas also creates a VPC when you create a VPC peering connection to an AWS VPC. Atlas creates the VPC in the same region as the peered VPC.

To use a different VPC (that is, on the customer's own cloud infrastructure accounts), you would need to use MongoDB Cloud Manager or Ops Manager.

If your firewall blocks outbound network connections, you must open outbound access from your application environment to MongoDB Atlas. To configure your application-side networks to accept Atlas traffic you can either use the:

You can parse these hostname values and pass the IP addresses programmatically into your application-tier orchestration automation to push firewall updates.

To find the public IP address for any node in your cluster, use the nslookup tool from the command line. The IP address is shown in the Address portion of the output.

$ nslookup ds-shard-00-00-17jcm.mongodb-dev.net
Address: 34.226.104.79
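The programmatic resolution described above can be sketched with the Python standard library. This is a minimal, hedged example, not an official Atlas tool; the hostname shown is the illustrative one from this page, and you would substitute your own cluster's hostnames.

```python
import socket

def resolve_public_ips(hostnames):
    """Resolve each hostname to its IPv4 address, mapping
    unresolvable names (e.g., a paused or renamed cluster) to None."""
    ips = {}
    for host in hostnames:
        try:
            ips[host] = socket.gethostbyname(host)
        except socket.gaierror:
            ips[host] = None
    return ips

# Illustrative hostname from this page; replace with your cluster's nodes.
nodes = ["ds-shard-00-00-17jcm.mongodb-dev.net"]
print(resolve_public_ips(nodes))
```

The resulting mapping can be fed into whatever orchestration layer manages your firewall rules.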

Clusters can span regions and cloud service providers. The total number of nodes in clusters that span regions is constrained on a per-project basis.

Atlas limits the total number of nodes in other regions in one project to a total of 40. This total excludes:

  • GCP regions communicating with each other
  • Free clusters or shared clusters

The total number of nodes between any two regions must meet this constraint.

Example

If an Atlas project has nodes in clusters spread across three regions:

  • 30 nodes in Region A
  • 10 nodes in Region B
  • 5 nodes in Region C

You can only add 5 nodes to Region C because:

  1. If you exclude Region C, Region A + Region B = 40.
  2. If you exclude Region B, Region A + Region C = 35, which is <= 40.
  3. If you exclude Region A, Region B + Region C = 15, which is <= 40.
  4. Each combination of regions with the added 5 nodes still meets the per-project constraint:

    • Region A + B = 40
    • Region A + C = 40
    • Region B + C = 20
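The pairwise check in the example above can be expressed as a short Python sketch. The limit value and the region counts mirror this page; the function names are illustrative, not part of any Atlas API.

```python
from itertools import combinations

CROSS_REGION_NODE_LIMIT = 40  # per-project limit described on this page

def within_limit(region_nodes):
    """True if every pair of regions totals at most the limit."""
    return all(a + b <= CROSS_REGION_NODE_LIMIT
               for a, b in combinations(region_nodes.values(), 2))

def max_addable_nodes(region_nodes, target):
    """Most nodes you can still add to `target` without any
    two-region pair exceeding the limit."""
    return min(CROSS_REGION_NODE_LIMIT - region_nodes[target] - n
               for region, n in region_nodes.items() if region != target)

regions = {"A": 30, "B": 10, "C": 5}
print(max_addable_nodes(regions, "C"))  # 5, matching the example above
```

Running this against the example's 30/10/5 split confirms that Region C has headroom for exactly 5 more nodes, because Region A + Region C is the binding pair.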

You can't create a multi-region cluster in a project if it has one or more clusters spanning 40 or more nodes in other regions.

Contact Atlas support for questions or assistance with raising this limit.

If you would exceed the cross-region permissions limit when creating a cluster through the Atlas API, the API returns the following error:

{
  "error" : 403,
  "detail" : "Cannot have more than 40 cross-region network permissions.",
  "reason" : "Forbidden"
}
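Automation that calls the Atlas API can detect this specific error by inspecting the response body. The sketch below is an assumption about how you might structure that check in Python; the payload shape matches the error document shown above, and the function name is illustrative.

```python
import json

def is_cross_region_limit_error(body: str) -> bool:
    """True if an Atlas API error body is the 40-permission limit error."""
    try:
        doc = json.loads(body)
    except json.JSONDecodeError:
        return False
    return (doc.get("error") == 403
            and "cross-region network permissions" in doc.get("detail", ""))
```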

Yes. AWS PrivateLink powers Atlas Private Endpoints. This allows for transitive connectivity. You can use the AWS Transit Gateway with your VPC if you connected your VPC to Atlas via AWS PrivateLink.

Yes. AWS PrivateLink powers Atlas Private Endpoints. This allows for transitive connectivity. You can use AWS Direct Connect with your VPC if you connected your VPC to Atlas via AWS PrivateLink.
