
Overview

This collector pulls logs from the Dynatrace API: audit records of admin activity from the audit logs endpoint, and log events via the Grail query endpoint.

Devo collector features

| Feature | Details |
| --- | --- |
| Allow parallel downloading (multipod) | Not allowed |
| Running environments | Collector server, on-premise |
| Populated Devo events | Table |
| Flattening preprocessing | No |
| Allowed source events obfuscation | Yes |

Data sources

| Data source | Description | API endpoint | Collector service name | Devo table |
| --- | --- | --- | --- | --- |
| Audit | Audit log records from your Dynatrace environment | /api/v2/auditlogs | audit | monitor.dynatrace.api.audit_log |
| Query | Query from any source in your Dynatrace domain | /platform/storage/query/v1/ | query | monitor.dynatrace.api.grail_query |

For more information on how the events are parsed, visit our page.

Vendor setup

This section contains all the information required to have an environment ready to be collected from. You can find more info in the Dynatrace documentation.

Generate access token

  1. Go to Access Tokens.

  2. Select Generate new token.

  3. Enter a name for your token. Dynatrace doesn't enforce unique token names, so you can create multiple tokens with the same name; be sure to provide a meaningful name for each token you generate, as proper naming helps you manage your tokens efficiently and delete them when they're no longer needed.

  4. Select the required scopes for the token:

    • logs.read scope

  5. Select Generate token.

  6. Copy the generated token to the clipboard and store it in a password manager for future use. The sketch below shows how such a token is presented to the API.
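If you want to sanity-check the token before configuring the collector, the following minimal Python sketch shows how a Dynatrace API token is typically presented to the Environment API. The placeholder values are ours, and which endpoints you can call depends on the scopes you selected (the /api/v2/auditlogs endpoint shown here requires the auditLogs.read scope described in the next section).

```python
# Minimal sketch (not collector code): presenting a Dynatrace API token
# to an Environment API v2 endpoint. BASE_URL and API_TOKEN are placeholders.
import requests

BASE_URL = "https://your-environment-id.live.dynatrace.com"  # see base URL formats below
API_TOKEN = "<your-generated-token>"

response = requests.get(
    f"{BASE_URL}/api/v2/auditlogs",
    # Dynatrace API tokens are passed in the Authorization header
    # using the Api-Token scheme.
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```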

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

Audit Service

Audit logging is disabled by default. To enable it:

  1. From the Dynatrace Menu, go to Settings > Preferences > Log audit events.

  2. Turn on Log all audit-related system events. Dynatrace retains audit logs for 30 days and automatically deletes them afterwards. You can also enable audit logs via the Data privacy API.

  3. Generate an access token:

    1. In the Dynatrace menu, select Access tokens.

    2. Select Generate new token.

    3. Enter a meaningful name for your token (Dynatrace doesn't enforce unique token names; see the naming guidance under Generate access token above).

    4. Select the auditLogs.read scope for the token.

    5. Select Generate token.

    6. Copy the generated token to the clipboard. Store the token in a password manager for future use.

  4. Determine your API base URL. API base URL formats are:

    • Managed / Dynatrace for Government: https://{your-domain}/e/{your-environment-id}

    • SaaS: https://{your-environment-id}.live.dynatrace.com

    • Environment ActiveGate: https://{your-activegate-domain}/e/{your-environment-id}

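As a reference, the windowed, paginated pull the collector performs against this endpoint can be sketched as follows. The query parameters (from, to, pageSize, sort) and the nextPageKey handling mirror the collector's own Setup/Puller output shown later on this page; the helper itself, and the auditLogs response field (taken from the public Audit Logs API), are illustrative rather than collector code.

```python
# Hedged sketch of a windowed, paginated audit log pull.
import requests

def fetch_audit_logs(base_url: str, api_token: str, start: str, end: str):
    """Yield audit log events between two ISO-8601 timestamps."""
    headers = {"Authorization": f"Api-Token {api_token}"}
    params = {"from": start, "to": end, "pageSize": 1000, "sort": "timestamp"}
    while True:
        resp = requests.get(f"{base_url}/api/v2/auditlogs",
                            headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("auditLogs", [])
        next_key = body.get("nextPageKey")
        if not next_key:
            break
        # Subsequent pages are requested with the nextPageKey alone.
        params = {"nextPageKey": next_key}
```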

Grail Query Service

You can create new OAuth clients on the Account Management page in Dynatrace.

  1. Go to Account Management. If you have more than one account, select the account you want to manage.

  2. From the top menu bar, select Identity & access management > OAuth clients.

  3. Select Create client.

  4. Provide the email address of the user who will own the client.

  5. Provide a description for the new client.

  6. Ensure that your client has the required permissions by selecting one or more options during client setup. For querying logs from Grail, this collector requires:

    • cloudautomation:logs:read

    • storage:logs:read

    • storage:buckets:read

    • storage:bucket-definitions:read

  7. Select Create client.

Save the generated client secret to a password manager for future use. You will also require the generated client ID when obtaining a bearer token.
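The client ID and secret are exchanged for a bearer token before Grail can be queried. The sketch below illustrates that exchange and a query execution under the following assumptions: the SSO token endpoint and the query:execute subresource follow Dynatrace's public documentation, while the scope list, the DQL string, and all placeholder values are ours. Check the vendor docs before relying on the details.

```python
# Hedged sketch: OAuth client-credentials exchange plus a Grail DQL query.
import requests

SSO_TOKEN_ENDPOINT = "https://sso.dynatrace.com/sso/oauth2/token"

def get_bearer_token(client_id: str, client_secret: str, resource_urn: str) -> str:
    resp = requests.post(
        SSO_TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "storage:logs:read storage:buckets:read",
            "resource": resource_urn,  # e.g. urn:dtaccount:<account-uuid>
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def run_dql_query(base_url: str, bearer: str, dql: str) -> dict:
    # Grail queries are executed through the query:execute subresource.
    resp = requests.post(
        f"{base_url}/platform/storage/query/v1/query:execute",
        headers={"Authorization": f"Bearer {bearer}"},
        json={"query": dql},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

# Example (placeholders):
#   token = get_bearer_token("<client-id>", "<client-secret>",
#                            "urn:dtaccount:<account-uuid>")
#   run_dql_query("<your-base-url>", token, "fetch logs")
```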

More on Dynatrace tokens.

Note: This minimum configuration refers exclusively to those parameters specific to this integration. There are more required parameters related to the generic behavior of the collector; check the settings sections for details.

| Setting | Details |
| --- | --- |
| client_id | The Dynatrace client ID |
| client_secret | The Dynatrace client secret |
| access_token | The Dynatrace access token, e.g. dt0s01.ST2EY72KQINMH574WMNVI7YN.G3DFPBEJYMODIDAEX454M7YWBUVEFOWKPRVMWFASS64NFH52PX6BNDVFFM572RZM |
| resource | The Dynatrace resource, e.g. urn:dtaccount:abcd1234-ab12-cd34-ef56-abcdef123456 |

See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.

Accepted authentication methods

| Authentication method | Base URL | Client ID | Client secret | Access token | Resource |
| --- | --- | --- | --- | --- | --- |
| Access Token | Required | | | Required | Required |
| OAuth | Required | Required | Required | Required | Required |
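Read as a decision rule, the table comes down to which credential the request ultimately carries. The sketch below is illustrative only (the setting names mirror the tables above, and get_bearer_token() refers to the Grail sketch earlier); it is not the collector's actual logic.

```python
# Illustrative only: the header each authentication method produces.
def build_auth_header(settings: dict) -> dict:
    if settings.get("client_id") and settings.get("client_secret"):
        # OAuth: exchange the client credentials for a bearer token first.
        token = get_bearer_token(settings["client_id"],
                                 settings["client_secret"],
                                 settings["resource"])
        return {"Authorization": f"Bearer {token}"}
    # Access-token method: the Dynatrace access token is sent directly.
    return {"Authorization": f"Api-Token {settings['access_token']}"}
```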

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

 Event deduplication

In the Audit service: All Dynatrace audit log records are fetched via the audit_log endpoint. The collector continually pulls new events since the last recorded timestamp. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls.

In the Grail service: there is no deduplication, as events are pulled by time window from your Dynatrace environment. If any duplicates are found, you need to investigate how they are getting into your Dynatrace instance.
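As a rough illustration of the Audit service's approach, hash-based deduplication can be sketched as follows. The sha256-style digest matches the shape of the last_ids values visible in the persistence trace below, but the function and field names are ours, not the collector's.

```python
# Hedged sketch of hash-based event deduplication.
import hashlib
import json

def event_hash(event: dict) -> str:
    # Serialize deterministically so identical events hash identically.
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def filter_duplicates(events: list, last_ids: set) -> list:
    """Drop events whose hash was already seen in a previous pull."""
    fresh = [e for e in events if event_hash(e) not in last_ids]
    last_ids.update(event_hash(e) for e in fresh)
    return fresh
```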

 Devo categorization and destination

All events are tagged with the Devo table of the service that pulled them (see the Data sources table above).

 Setup/Puller Output
2023-07-28T20:58:54.656    INFO InputProcess::MainThread -> DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Starting thread
2023-07-28T20:58:54.657    INFO InputProcess::MainThread -> DynatracePuller(dynatrace,10001,audit_log,predefined) - Starting thread
2023-07-28T20:58:54.799    INFO InputProcess::DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Successfully tested fetch from /api/v2/auditlogs. Source is pullable.
2023-07-28T20:58:54.800    INFO InputProcess::DynatracePullerSetup(unknown,dynatrace#10001,audit_log#predefined) -> Setup for module <DynatracePuller> has been successfully executed


2023-07-28T20:58:55.663    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> DynatracePuller(dynatrace,10001,audit_log,predefined) Starting the execution of pre_pull()
2023-07-28T20:58:55.665    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Reading persisted data
2023-07-28T20:58:55.666    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Data retrieved from the persistence: {'@persistence_version': 1, 'start_time_in_utc': '2023-07-07T01:23:01Z', 'last_event_time_in_utc': '2023-07-28T19:32:12Z', 'last_ids': ['747ad97811911407a8df10f18a28aa3911ab1064d89a2bc40f33403b11f26be9'], 'next_page_key': None}
2023-07-28T20:58:55.667    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence upgrade steps
2023-07-28T20:58:55.668    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence corrections steps
2023-07-28T20:58:55.669    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Running the persistence corrections steps
2023-07-28T20:58:55.670    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> No changes were detected in the persistence
2023-07-28T20:58:55.671    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> DynatracePuller(dynatrace,10001,audit_log,predefined) Finalizing the execution of pre_pull()
2023-07-28T20:58:55.671    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Starting data collection every 600 seconds
2023-07-28T20:58:55.672    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Pull Started
2023-07-28T20:58:55.673    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Fetching all activity logs subject to the following parameters: {'from': '2023-07-28T19:32:12+00:00', 'to': '2023-07-29T00:58:55+00:00', 'pageSize': 1000, 'sort': 'timestamp'}
2023-07-28T20:58:56.010    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> No more next_page_key values returned. Setting pull_completed to True.
2023-07-28T20:58:56.019    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Updating the persistence
2023-07-28T20:58:56.020    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> (Partial) Statistics for this pull cycle (@devo_pulling_id=1690592335663):Number of requests made: 1; Number of events received: 2; Number of duplicated events filtered out: 1; Number of events generated and sent: 1; Average of events per second: 2.874.


After a successful collector execution (that is, no error logs were found), you should be able to see the following log messages:

2023-07-28T20:58:56.020    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1690592335663):Number of requests made: 1; Number of events received: 2; Number of duplicated events filtered out: 1; Number of events generated and sent: 1; Average of events per second: 2.871.
2023-07-28T20:58:56.020    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> The data is up to date!
2023-07-28T20:58:56.021    INFO InputProcess::DynatracePuller(dynatrace,10001,audit_log,predefined) -> Data collection completed. Elapsed time: 0.358 seconds. Waiting for 599.642 second(s) until the next one
 Restart the persistence

This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:

  1. Edit the configuration file.

  2. Change the value of the start_time_in_utc parameter to a different one.

  3. Save the changes.

  4. Restart the collector.

The collector will detect this change and restart the persistence using the parameters of the configuration file, or the default configuration if none has been provided.
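Conceptually, the check the collector performs can be sketched like this; the field name matches the persisted state shown in the pre_pull() trace above, but the function is illustrative only.

```python
# Illustrative sketch: a persistence reset is triggered when the configured
# start date no longer matches the one stored in the persisted state.
def needs_persistence_reset(config: dict, persisted: dict) -> bool:
    return config.get("start_time_in_utc") != persisted.get("start_time_in_utc")
```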

 Troubleshooting

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

| Error type | Error ID | Error message | Cause | Solution |
| --- | --- | --- | --- | --- |
| InitVariablesError | 1 | Invalid start_time_in_utc: {ini_start_str}. Must be in parseable datetime format. | The configured start_time_in_utc parameter is in a non-parseable format. | Update the start_time_in_utc value to the recommended format as indicated in this guide. |
| InitVariablesError | 2 | Invalid start_time_in_utc: {ini_start_str}. Must be in the past. | The configured start_time_in_utc parameter is a future date. | Update the start_time_in_utc value to a past datetime. |
| SetupError | 101 | Failed to fetch OAuth token from {token_endpoint}. Exception: {e}. | The provided credentials, base URL, and/or token endpoint are incorrect. | Revisit the configuration steps and ensure that the correct values were specified in the config file. |
| SetupError | 102 | Failed to fetch data from {endpoint}. Source is not pullable. | The provided credentials, base URL, and/or token endpoint are incorrect. | Revisit the configuration steps and ensure that the correct values were specified in the config file. |
| ApiError | 401 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 401 response. | Ensure that the provided credentials are correct and grant read access to the targeted data. |
| ApiError | 429 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 429 response. | The collector retries requests (by default up to 3 times) and respects back-off headers if present. If the collector repeatedly encounters this error, adjust the rate limit and/or contact the API provider to ensure that you have enough quota to complete the data pull. |
| ApiError | 498 | Error during API call to [API provider HTML error response here] | The server returned an HTTP 500 response. | If the API returns a 500 but subsequent runs complete successfully, you may ignore this error. If the API repeatedly returns a 500, ensure the server is reachable and operational. |
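For the HTTP 429 case, the documented behavior (up to 3 retries, honoring back-off headers when present) can be sketched as follows. The Retry-After header and the exponential fallback are assumptions for illustration, not necessarily the collector's exact implementation.

```python
# Hedged sketch of retry-with-back-off handling for HTTP 429 responses.
import time
import requests

def get_with_retries(url: str, headers: dict, max_retries: int = 3):
    for attempt in range(max_retries + 1):
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        if attempt == max_retries:
            break
        # Respect the server's back-off hint if present; otherwise
        # fall back to exponential back-off (1s, 2s, 4s, ...).
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"Rate limited after {max_retries} retries: {url}")
```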

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

 Operations to verify collector

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, as well as validating the given configuration.

A successful run has the following output messages for the initializer module:

2023-01-10T15:22:57.146    INFO MainProcess::MainThread -> Loading configuration using the following files: {"full_config": "config-test-local.yaml", "job_config_loc": null, "collector_config_loc": null}
2023-01-10T15:22:57.146    INFO MainProcess::MainThread -> Using the default location for "job_config_loc" file: "/etc/devo/job/job_config.json"
2023-01-10T15:22:57.147    INFO MainProcess::MainThread -> "\etc\devo\job" does not exists
2023-01-10T15:22:57.147    INFO MainProcess::MainThread -> Using the default location for "collector_config_loc" file: "/etc/devo/collector/collector_config.json"
2023-01-10T15:22:57.148    INFO MainProcess::MainThread -> "\etc\devo\collector" does not exists
2023-01-10T15:22:57.148    INFO MainProcess::MainThread -> Results of validation of config files parameters: {"config": "C:\git\collectors2\devo-collector-<name>\config\config.yaml", "config_validated": True, "job_config_loc": "/etc/devo/job/job_config.json", "job_config_loc_default": True, "job_config_loc_validated": False, "collector_config_loc": "/etc/devo/collector/collector_config.json", "collector_config_loc_default": True, "collector_config_loc_validated": False}
2023-01-10T15:22:57.171 WARNING MainProcess::MainThread -> [WARNING] Illegal global setting has been ignored -> multiprocessing: False

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for the event delivery module:

2023-01-10T15:23:00.788    INFO OutputProcess::MainThread -> DevoSender(standard_senders,devo_sender_0) -> Starting thread
2023-01-10T15:23:00.789    INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(standard_senders,devo_1) -> Starting thread (every 300 seconds)
2023-01-10T15:23:00.790    INFO OutputProcess::MainThread -> DevoSenderManager(standard_senders,manager,devo_1) -> Starting thread
2023-01-10T15:23:00.842    INFO OutputProcess::MainThread -> global_status: {"output_process": {"process_id": 18804, "process_status": "running", "thread_counter": 21, "thread_names": ["MainThread", "pydevd.Writer", "pydevd.Reader", "pydevd.CommandThread", "pydevd.CheckAliveThread", "DevoSender(standard_senders,devo_sender_0)", "DevoSenderManagerMonitor(standard_senders,devo_1)", "DevoSenderManager(standard_senders,manager,devo_1)", "OutputStandardConsumer(standard_senders_consumer_0)",

Sender services

The Integrations Factory Collector SDK has 3 different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following Sender Services:

| Logging trace | Description |
| --- | --- |
| Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service. |
| Sender manager internal queue size: 0 | Displays the number of items in the internal sender queue. This value helps detect bottlenecks; if it grows, data delivery to Devo can be sped up by increasing the number of concurrent senders. |
| Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds) | Displays event counts since the last time the collector executed the pull logic. In this example: 44 events were sent to Devo since the collector started; the last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00; 21 events were sent between that UTC checkpoint and now; and those 21 events took 0.007 seconds to deliver. |

By default, these traces are shown every 10 minutes.

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

| Logging trace | Description |
| --- | --- |
| Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service. |
| Sender manager internal queue size: 0 | Displays the number of items in the internal sender queue. |
| Standard - Total number of messages sent: 57, messages sent since "2023-01-10 16:09:16.116750+00:00": 0 (elapsed 0.000 seconds) | Displays event counts since the last time the collector executed the pull logic. In this example: 57 standard events were sent to Devo since the collector started; the last checkpoint timestamp was 2023-01-10 16:09:16.116750+00:00; 0 events were sent between that UTC checkpoint and now; and delivering them took 0.000 seconds. |

 Check memory usage

To check the memory usage of this collector, look for the following log records, which are displayed every 5 minutes by default and always after the memory-freeing process has run.

  • The used memory is displayed per running process; the sum of the two values gives the total memory used by the collector. For example, in the traces below the total RSS after freeing is 34.08 MiB + 28.41 MiB ≈ 62.5 MiB.

  • The global pressure on the available memory is displayed in the global value.

  • All metrics (global, RSS, VMS) show the value before freeing memory -> after freeing memory.

  INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB ->
  410.02MiB)
  INFO OutputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(28.41MiB -> 28.41MiB), VMS(705.28MiB ->
  705.28MiB)

Change log

| Release | Released on | Release type | Recommendations |
| --- | --- | --- | --- |
| v1.0.0 | | NEW FEATURE | Recommended version |
