Integrate with a Monitoring Service¶
You can use a third-party application to view and analyze performance metrics that Atlas collects about your cluster.
At this time, you can either build a monitoring integration using the Atlas API or integrate Atlas with Datadog.
Build Monitoring Integrations with Atlas API¶
You can build a monitoring integration using the Atlas API monitoring and logs endpoints.
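As a sketch of what such an integration looks like, the request below pulls per-minute measurements for a single process over the last hour. The group ID, process host and port, and API key pair are placeholders you must substitute with your own values:

```shell
# Placeholder values -- substitute your own project (group) ID,
# process host:port, and Atlas API key pair.
GROUP_ID="{GROUP-ID}"
PROCESS="{HOSTNAME}:{PORT}"
PUBLIC_KEY="{PUBLIC-KEY}"
PRIVATE_KEY="{PRIVATE-KEY}"

# Request per-minute measurements for the last hour. The Atlas API
# uses HTTP digest authentication with your API key pair.
curl --user "${PUBLIC_KEY}:${PRIVATE_KEY}" --digest \
     --header "Accept: application/json" \
     "https://cloud.mongodb.com/api/atlas/v1.0/groups/${GROUP_ID}/processes/${PROCESS}/measurements?granularity=PT1M&period=PT1H"
```

The `granularity` and `period` parameters use ISO 8601 duration notation; adjust them to control the resolution and window of the returned metric data.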
Integrate Atlas with Datadog¶
You can configure Atlas to send metric data about your project to your Datadog dashboards.
If your Atlas project is configured to send alerts and events to Datadog, you do not need to follow this procedure. Atlas sends project metrics to Datadog through the same integration used to send alerts and events.
Datadog integration is available only on M10+ clusters.
To integrate Atlas with Datadog, you must have a Datadog account and a Datadog API key. Datadog grants you an API key when you first create a Datadog account.
If you do not have an existing Datadog account, you can sign up at https://app.datadoghq.com/signup.
To configure Atlas integration with Datadog:
Navigate to your Project Integrations.¶
- If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
- If it is not already displayed, select your desired project from the Projects menu in the navigation bar.
- Next to the Projects menu, expand the Options menu, then click Integrations.
Link Datadog to your project using your Datadog API key.¶
- Click Configure for the Datadog integration card.
- Enter your Datadog API key in the input box.
- Select your API Region (United States or Europe).
- Click Save.
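The same configuration can also be applied programmatically through the Atlas third-party integrations endpoint. A hedged sketch, with placeholder project ID and keys:

```shell
GROUP_ID="{GROUP-ID}"             # Atlas project (group) ID (placeholder)
PUBLIC_KEY="{PUBLIC-KEY}"         # Atlas API public key (placeholder)
PRIVATE_KEY="{PRIVATE-KEY}"       # Atlas API private key (placeholder)
DATADOG_API_KEY="{DATADOG-KEY}"   # Datadog API key (placeholder)

# Create or update the Datadog integration for the project.
# "region" selects the Datadog API region: US or EU.
curl --user "${PUBLIC_KEY}:${PRIVATE_KEY}" --digest \
     --header "Content-Type: application/json" \
     --request POST \
     --data "{\"type\": \"DATADOG\", \"apiKey\": \"${DATADOG_API_KEY}\", \"region\": \"US\"}" \
     "https://cloud.mongodb.com/api/atlas/v1.0/groups/${GROUP_ID}/integrations/DATADOG"
```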
Performance Metrics Available to Datadog¶
Datadog tracks the following metric data for your Atlas cluster. The metric names in parentheses are the names used in the Datadog UI. Each metric is measured once per minute.
|Metric Type|Description|
|---|---|
|Process|Number of connections currently open on the cluster.|
|Process|Total database storage size and data size on the cluster, in bytes.|
|Disk|Percentage of time during which requests are being issued to and serviced by the disk partition. Includes requests from all processes, not just MongoDB processes.|
|Process|Number of documents read or written per second.|
|Process|Number of operations per second, separated by operation type.|
|Process|Average operation time in milliseconds, separated by operation type.|
|Process|Ratio of the number of objects scanned to the number of objects returned. Lower values indicate more efficient queries.|
|System|Percentage of time logical CPUs are utilized across all processes on the server, normalized with respect to the number of logical CPU cores.|
|Process|Percentage of time logical CPUs are utilized by the MongoDB process on the server, normalized with respect to the number of logical CPU cores.|
|Process|Memory (in …)|
|Process|Average rate at which the primary generates oplog data, in gigabytes per hour.|
|Disk|Free disk space and used disk space, in bytes, on the disk partition used by MongoDB.|
|Disk|Throughput of IOPS for the disk partition used by MongoDB.|
|Process|Average rate of bytes read into and written from the WiredTiger cache.|
|Process|Number of bytes of data and number of bytes of dirty data in the WiredTiger cache.|
|Process|Number of read and write operations in the storage engine.|

You can view the replication opcounter metrics on the Opcounters - Repl chart, accessed via Cluster Metrics.
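Once these metrics flow into Datadog, you can query them back through the Datadog metrics query API. A sketch, assuming placeholder Datadog keys and an assumed `mongodb.atlas.*` metric name; verify the exact metric names in your Datadog metric explorer:

```shell
DD_API_KEY="{DATADOG-API-KEY}"    # placeholder Datadog API key
DD_APP_KEY="{DATADOG-APP-KEY}"    # placeholder Datadog application key

NOW=$(date +%s)
HOUR_AGO=$((NOW - 3600))

# Query the last hour of a cluster metric. The metric name below is
# an assumption -- check the Datadog metric explorer for exact names.
curl --get "https://api.datadoghq.com/api/v1/query" \
     --header "DD-API-KEY: ${DD_API_KEY}" \
     --header "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
     --data-urlencode "from=${HOUR_AGO}" \
     --data-urlencode "to=${NOW}" \
     --data-urlencode "query=avg:mongodb.atlas.connections.current{*}"
```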