
Dynatrace collector

Overview

This collector pulls logs from the Dynatrace API: audit records from the audit logs (admin activities) endpoint and log query results from the Grail storage query endpoint.

Devo collector features

Feature | Details
Allow parallel downloading (multipod) | Not allowed
Running environments | Collector server, On-premise
Populated Devo events | Table
Flattening preprocessing | No
Allowed source events obfuscation | Yes

Data sources

Data source | Description | API endpoint | Collector service name | Devo table
Audit | Audit log records from your Dynatrace environment | /api/v2/auditlogs | audit | monitor.dynatrace.api.audit_log
Query | Query from any source in your Dynatrace domain | /platform/storage/query/v1/ | query | monitor.dynatrace.api.grail_query

For more information on how the events are parsed, visit our page.

Vendor setup

This section contains the information required to get an environment ready for collection. You can find more information in the Dynatrace documentation.

Generate access token

  1. Go to Access Tokens.

  2. Select Generate new token.

  3. Enter a name for your token. Dynatrace doesn't enforce unique token names, so you can create multiple tokens with the same name. Give each token a meaningful name; proper naming helps you manage your tokens efficiently and delete them when they're no longer needed.

  4. Select the required scopes for the token:

    • logs.read

  5. Select Generate token.

  6. Copy the generated token to the clipboard and store it in a password manager for future use.
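If you want to confirm the token works before configuring the collector, a quick API call is enough. The following is a minimal sketch, assuming a SaaS-style base URL and a token carrying the logs.read scope; the /api/v2/logs/search endpoint, the limit parameter, and the Api-Token header format come from the Dynatrace Environment API and should be confirmed against the Dynatrace documentation for your environment.

```python
import os

import requests

# Assumptions: SaaS-style base URL and a token carrying the logs.read scope.
BASE_URL = "https://{your-environment-id}.live.dynatrace.com"
API_TOKEN = os.environ["DT_API_TOKEN"]  # the token generated above

# Dynatrace API tokens are sent in the Authorization header with the
# "Api-Token" prefix (not "Bearer").
headers = {"Authorization": f"Api-Token {API_TOKEN}"}

# Ask for a single log record just to prove the token and scope are accepted.
resp = requests.get(
    f"{BASE_URL}/api/v2/logs/search",
    headers=headers,
    params={"limit": 1},
    timeout=30,
)
resp.raise_for_status()
print("Token accepted; sample response keys:", list(resp.json().keys()))
```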

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

Audit Service

Enable audit logging, which is disabled by default:

  1. From the Dynatrace Menu, go to Settings > Preferences > Log audit events.

  2. Turn on Log all audit-related system events.

  3. Generate an access token:

    1. In the Dynatrace menu, select Access tokens.

    2. Select Generate new token.

    3. Enter a name for your token. Dynatrace doesn't enforce unique token names, so you can create multiple tokens with the same name. Give each token a meaningful name; proper naming helps you manage your tokens efficiently and delete them when they're no longer needed.

    4. Select the auditLogs.read scope for the token.

    5. Select Generate token.

    6. Copy the generated token to the clipboard. Store the token in a password manager for future use.

  4. Determine your API base URL. API base URL formats are:

    • Managed / Dynatrace for Government: https://{your-domain}/e/{your-environment-id}

    • SaaS: https://{your-environment-id}.live.dynatrace.com

    • Environment ActiveGate: https://{your-activegate-domain}/e/{your-environment-id}

Dynatrace retains audit logs for 30 days and automatically deletes them afterwards. You can also enable audit logs via Data privacy API.
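To check that audit logging is enabled and that the token carries the auditLogs.read scope, you can query the audit logs endpoint directly. This is a minimal sketch, assuming a SaaS base URL; the from/to/pageSize/sort parameters mirror the ones the collector itself sends (see the example run log further below), and the auditLogs field in the response is an assumption to verify against the Dynatrace API reference.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

BASE_URL = "https://{your-environment-id}.live.dynatrace.com"  # adjust for Managed/ActiveGate
API_TOKEN = os.environ["DT_API_TOKEN"]  # token with the auditLogs.read scope

now = datetime.now(timezone.utc)
params = {
    # Same query parameters the collector uses when pulling audit logs.
    "from": (now - timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "to": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "pageSize": 100,
    "sort": "timestamp",
}

resp = requests.get(
    f"{BASE_URL}/api/v2/auditlogs",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params=params,
    timeout=30,
)
resp.raise_for_status()
records = resp.json().get("auditLogs", [])
print(f"Fetched {len(records)} audit log record(s) from the last hour")
```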

Grail Query Service

You can create new OAuth clients on the Account Management page in Dynatrace.

  1. Go to Account Management. If you have more than one account, select the account you want to manage.

  2. From the top menu bar, select Identity & access management > OAuth clients.

  3. Select Create client.

  4. Provide the email address of the user who will own the client.

  5. Provide a description for the new client.

  6. Ensure that your client has the required permissions by selecting one or more options during client setup. For querying logs with this collector, the client requires:

    • cloudautomation:logs:read

    • storage:logs:read

    • storage:buckets:read

    • storage:bucket-definitions:read

  7. Select Create client.

Save the generated client secret to a password manager for future use. You will also require the generated client ID when obtaining a bearer token.
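To confirm the OAuth client works before configuring the collector, you can request a bearer token with the client-credentials flow. The sketch below is illustrative only: the https://sso.dynatrace.com/sso/oauth2/token endpoint, the scope string, and the urn:dtaccount resource format are assumptions based on Dynatrace's OAuth documentation and should be verified for your account.

```python
import os

import requests

# Assumed Dynatrace SSO token endpoint for the client-credentials flow.
TOKEN_ENDPOINT = "https://sso.dynatrace.com/sso/oauth2/token"

payload = {
    "grant_type": "client_credentials",
    "client_id": os.environ["DT_CLIENT_ID"],
    "client_secret": os.environ["DT_CLIENT_SECRET"],
    # Scopes selected when the OAuth client was created.
    "scope": "storage:logs:read storage:buckets:read storage:bucket-definitions:read",
    # Account URN shown in Account Management, e.g. urn:dtaccount:<account-uuid>.
    "resource": os.environ["DT_ACCOUNT_URN"],
}

resp = requests.post(TOKEN_ENDPOINT, data=payload, timeout=30)
resp.raise_for_status()
bearer_token = resp.json()["access_token"]
print("Bearer token obtained; expires in", resp.json().get("expires_in"), "seconds")
```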


This minimum configuration refers exclusively to the parameters specific to this integration. There are more required parameters related to the generic behavior of the collector. Check the setting sections for details.

Setting | Details
client_id | The Dynatrace client ID
client_secret | The Dynatrace client secret
access_token | The Dynatrace access token, for example dt0s01.ST2EY72KQINMH574WMNVI7YN.G3DFPBEJYMODIDAEX454M7YWBUVEFOWKPRVMWFASS64NFH52PX6BNDVFFM572RZM
resource | The Dynatrace resource, for example urn:dtaccount:abcd1234-ab12-cd34-ef56-abcdef123456

See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.

Accepted authentication methods

Authentication Method | Base Url | Client ID | Client Secret | Access Token | Resource
Access Token | Required | Required | Not required | Required | Not required
OAuth | Required | Required | Required | Required | Required
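The two methods ultimately translate into different HTTP Authorization headers: an API token is sent with the Api-Token prefix, while an OAuth client first exchanges its credentials for a bearer token (see the sketch in the Grail Query Service section) and sends that with the Bearer prefix. The helper below is a minimal illustration of that difference, with the header formats taken from Dynatrace's API conventions.

```python
def auth_header(method: str, credential: str) -> dict:
    """Build the Authorization header for the chosen authentication method.

    `credential` is either the Dynatrace access token (Access Token method)
    or a bearer token previously obtained with the OAuth client-credentials
    flow (OAuth method).
    """
    if method == "access_token":
        return {"Authorization": f"Api-Token {credential}"}
    if method == "oauth":
        return {"Authorization": f"Bearer {credential}"}
    raise ValueError(f"Unsupported authentication method: {method}")


# Example usage with placeholder values.
print(auth_header("access_token", "dt0s01.SAMPLE.TOKEN"))
print(auth_header("oauth", "eyJhbGciOi..."))
```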

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector on your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

In the Audit service: All Dynatrace audit log records are fetched via the audit_log endpoint. The collector continually pulls new events since the last recorded timestamp. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls.
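As an illustration of this approach (not the collector's actual code), deduplication can be pictured as hashing each record and comparing it against the hashes persisted from the previous pull:

```python
import hashlib
import json


def event_hash(event: dict) -> str:
    """Stable hash of an event, used as its deduplication key."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def drop_duplicates(events: list[dict], last_ids: set[str]) -> list[dict]:
    """Filter out events whose hashes were already seen in the previous pull."""
    return [e for e in events if event_hash(e) not in last_ids]


# Example: the second pull overlaps with the first by one event.
first_pull = [{"eventType": "LOGIN", "timestamp": 1}, {"eventType": "UPDATE", "timestamp": 2}]
seen = {event_hash(e) for e in first_pull}
second_pull = [{"eventType": "UPDATE", "timestamp": 2}, {"eventType": "DELETE", "timestamp": 3}]
print(drop_duplicates(second_pull, seen))  # only the DELETE event remains
```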

In the Grail service: there is NO deduplication; events are pulled from your Dynatrace environment using time-based query controls. If you find duplicates, investigate how they are reaching your Dynatrace instance.
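For reference, a Grail query can be issued directly against the storage query API listed in the data sources table. The sketch below is assumption-heavy: the .apps.dynatrace.com host, the query:execute path suffix under the documented /platform/storage/query/v1/ path, and the response shape should all be confirmed against the Dynatrace Grail documentation. The bearer token comes from the OAuth sketch shown earlier.

```python
import os

import requests

# Assumed platform host; adjust to your environment.
BASE_URL = "https://{your-environment-id}.apps.dynatrace.com"
BEARER_TOKEN = os.environ["DT_BEARER_TOKEN"]  # obtained via the OAuth client-credentials flow

resp = requests.post(
    # Assumed execute endpoint under the documented /platform/storage/query/v1/ path.
    f"{BASE_URL}/platform/storage/query/v1/query:execute",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={"query": "fetch logs | limit 10"},  # minimal DQL query returning up to 10 log records
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```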

All events are tagged with the service that pulled them.

Example log output from a pull cycle of the Audit service:

2023-07-28T20:58:54.656 INFO InputProcess::MainThread -> DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Starting thread
2023-07-28T20:58:54.657 INFO InputProcess::MainThread -> DynatracePuller(dynatrace,10001,audit_log,predefined) - Starting thread
2023-07-28T20:58:54.799 INFO InputProcess::DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Successfully tested fetch from /api/v2/auditlogs. Source is pullable.
2023-07-28T20:58:54.800 INFO InputProcess::DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Setup for module <DynatracePuller> has been successfully executed
2023-07-28T20:58:55.663 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> DynatracePuller(dynatrace,10001,audit_log,predefined) Starting the execution of pre_pull()
2023-07-28T20:58:55.665 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Reading persisted data
2023-07-28T20:58:55.666 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Data retrieved from the persistence: {'@persistence_version': 1, 'start_time_in_utc': '2023-07-07T01:23:01Z', 'last_event_time_in_utc': '2023-07-28T19:32:12Z', 'last_ids': ['747ad97811911407a8df10f18a28aa3911ab1064d89a2bc40f33403b11f26be9'], 'next_page_key': None}
2023-07-28T20:58:55.667 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence upgrade steps
2023-07-28T20:58:55.668 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence corrections steps
2023-07-28T20:58:55.669 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence corrections steps
2023-07-28T20:58:55.670 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> No changes were detected in the persistence
2023-07-28T20:58:55.671 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> DynatracePuller(dynatrace,10001,audit_log,predefined) Finalizing the execution of pre_pull()
2023-07-28T20:58:55.671 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Starting data collection every 600 seconds
2023-07-28T20:58:55.672 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Pull Started
2023-07-28T20:58:55.673 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Fetching all activity logs subject to the following parameters: {'from': '2023-07-28T19:32:12+00:00', 'to': '2023-07-29T00:58:55+00:00', 'pageSize': 1000, 'sort': 'timestamp'}
2023-07-28T20:58:56.010 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> No more next_page_key values returned. Setting pull_completed to True.
2023-07-28T20:58:56.019 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Updating the persistence
2023-07-28T20:58:56.020 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> (Partial) Statistics for this pull cycle (@devo_pulling_id=1690592335663):Number of requests made: 1; Number of events received: 2; Number of duplicated events filtered out: 1; Number of events generated and sent: 1; Average of events per second: 2.874.

After a successful collector’s execution (that is, no error logs were found), you should be able to see the following log messages:

2023-07-28T20:58:56.020 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1690592335663):Number of requests made: 1; Number of events received: 2; Number of duplicated events filtered out: 1; Number of events generated and sent: 1; Average of events per second: 2.871.
2023-07-28T20:58:56.020 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> The data is up to date!
2023-07-28T20:58:56.021 INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Data collection completed. Elapsed time: 0.358 seconds. Waiting for 599.642 second(s) until the next one

This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:

  1. Edit the configuration file.

  2. Change the value of the start_time_in_utc parameter to a different one.

  3. Save the changes.

  4. Restart the collector.

The collector will detect this change and restart the persistence using the parameters from the configuration file, or the default configuration if none has been provided.
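Conceptually (this is an illustration, not the collector's actual implementation), the reset works by comparing the configured start_time_in_utc with the value stored in the persisted state, which has the structure shown in the log output above:

```python
# Illustrative persisted state, mirroring the structure shown in the log output above.
persisted = {
    "@persistence_version": 1,
    "start_time_in_utc": "2023-07-07T01:23:01Z",
    "last_event_time_in_utc": "2023-07-28T19:32:12Z",
    "last_ids": ["747ad978..."],  # hashes of the last events seen (truncated here)
    "next_page_key": None,
}

configured_start = "2024-01-01T00:00:00Z"  # new value set in the configuration file

if configured_start != persisted["start_time_in_utc"]:
    # A changed start_time_in_utc means the operator wants to re-ingest history,
    # so the persisted cursor is discarded and pulling restarts from the new date.
    persisted = {
        "@persistence_version": 1,
        "start_time_in_utc": configured_start,
        "last_event_time_in_utc": configured_start,
        "last_ids": [],
        "next_page_key": None,
    }

print(persisted)
```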

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

Error Type | Error Id | Error Message | Cause | Solution
InitVariablesError | 1 | Invalid start_time_in_utc: {ini_start_str}. Must be in parseable datetime format. | The configured start_time_in_utc parameter is in a non-parseable format. | Update the start_time_in_utc value to the recommended format indicated in this guide.
InitVariablesError | 2 | Invalid start_time_in_utc: {ini_start_str}. Must be in the past.. | The configured start_time_in_utc parameter is a future date. | Update the start_time_in_utc value to a past datetime.
SetupError | 101 | Failed to fetch OAuth token from {token_endpoint}. Exception: {e}. | The provided credentials, base URL, and/or token endpoint are incorrect. | Revisit the configuration steps and ensure that the correct values were specified in the config file.
SetupError | 102 | Failed to fetch data from {endpoint}. Source is not pullable. | The provided credentials, base URL, and/or token endpoint are incorrect. | Revisit the configuration steps and ensure that the correct values were specified in the config file.
ApiError | 401 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 401 response. | Ensure that the provided credentials are correct and provide read access to the targeted data.
ApiError | 429 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 429 response. | The collector retries requests (by default up to 3 times) and respects back-off headers when present. If the collector repeatedly encounters this error, adjust the rate limit and/or contact the API provider to ensure that you have enough quota to complete the data pull.
ApiError | 498 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 500 response. | If the API returns a 500 but subsequent runs complete successfully, you may ignore this error. If the API repeatedly returns 500 errors, ensure the server is reachable and operational.
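The retry behaviour described for HTTP 429 can be pictured as a simple back-off loop. The sketch below is illustrative only (the collector's internal retry logic may differ) and assumes the server sends a standard Retry-After header when rate limiting.

```python
import time

import requests


def get_with_backoff(url: str, headers: dict, max_retries: int = 3) -> requests.Response:
    """GET a URL, retrying on HTTP 429 and honouring the Retry-After header."""
    for attempt in range(max_retries + 1):
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Respect the back-off requested by the server, falling back to
        # exponential back-off when no Retry-After header is present.
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate limited after {max_retries} retries: {url}")
```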

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Change log

Release | Released on | Release type | Recommendations
v1.0.0 | Oct 7, 2024 | NEW FEATURE | Recommended version