
Darktrace Respond collector

[ Overview ] [ Devo collector features ] [ Data sources ] [ Flattening preprocessing ] [ Vendor setup ] [ Minimum configuration required for basic pulling ] [ Accepted authentication methods ] [ Run the collector ] [ Collector services detail ] [ Collector operations ] [ Change log ]

Overview

Darktrace RESPOND works autonomously to disarm attacks whenever they occur. It reacts to threats in seconds, working 24/7 and freeing up security teams and resources.

Darktrace Self-Learning AI delivers precise information about what’s not normal to your organization. Darktrace RESPOND takes precise action to neutralize threats against any and every asset, no matter where data resides.

Devo collector features

| Feature | Details |
| Allow parallel downloading (multipod) | Not allowed |
| Running environments | Collector server, On-premise |
| Populated Devo events | Table |
| Flattening preprocessing | Yes |
| Allowed source events obfuscation | Yes |

Data sources

| Data source | Description | API endpoint | Collector service name | Devo table | Available from release |
| Antigena Actions | Gives information about current and past Darktrace RESPOND/Network (formerly Antigena Network) actions. | /antigena | antigena | edr.darktrace.respond.antigena | v1.0.0 |
| AiAnalyst Incident Events | Provides access to AI Analyst events: a group of anomalies or network activity investigated by Cyber AI Analyst. | /aianalyst/incidentevents | aianalyst_incidentevents | edr.darktrace.respond.incident_event | v1.0.0 |
| Summary Statistics | Returns simple statistics on device counts, processed bandwidth, and the number of active Darktrace RESPOND actions. | /summarystatistics | summarystatistics | edr.darktrace.respond.summary | v1.0.0 |
| Status | Detailed system health information from the Status page. | /status | status | edr.darktrace.respond.status | v1.0.0 |
| Modelbreaches | Returns a time-sorted list of model breaches, filtered by the specified parameters. | /modelbreaches | modelbreaches | edr.darktrace.respond.model_breach | v1.0.1 |

For more information on how the events are parsed, visit our page.

Flattening preprocessing

| Data source | Collector service | Optional | Flattening details |
| Status | status | Yes | The nested "instances" map is split into one event per instance (see the example below). |

Original:

{
  "time": "2023-05-31 09:37",
  "installed": "2022-06-15",
  "mobileAppConfigured": false,
  "version": "6.0.32 (a1c388)",
  "ipAddress": "172.27.24.26",
  "modelsUpdated": "2023-05-30 18:46:46",
  "modelPackageVersion": "6.0.23-1019~20230530170515~g6d5204",
  "bundleVersion": "60076",
  "instances": {
    "1": {
      "id": 1,
      "downCount": 1,
      "upCount": 1,
      "downTimeMs": 1685514150000000,
      "downTime": "2023-05-31 06:22:30",
      "version": "6.0.32 (a1c388)",
      "ipAddress": "172.27.24.26"
    },
    "2": {
      "id": 2,
      "downCount": 8,
      "upCount": 8,
      "downTimeMs": 1685516996000000,
      "downTime": "2023-05-31 07:09:56",
      "version": "6.0.32 (a1c388)",
      "ipAddress": "10.32.1.99"
    }
  }
}

Result (the event generated for instance 2 is shown):

{
  "time": "2023-05-31 09:37",
  "installed": "2022-06-15",
  "mobileAppConfigured": false,
  "version": "6.0.32 (a1c388)",
  "ipAddress": "172.27.24.26",
  "modelsUpdated": "2023-05-30 18:46:46",
  "modelPackageVersion": "6.0.23-1019~20230530170515~g6d5204",
  "bundleVersion": "60076",
  "instance_info": {
    "id": 2,
    "downCount": 8,
    "upCount": 8,
    "downTimeMs": 1685516996000000,
    "downTime": "2023-05-31 07:09:56",
    "version": "6.0.32 (a1c388)",
    "ipAddress": "10.32.1.99"
  }
}

Vendor setup

You need a Darktrace user with the Unrestricted Devices and Visualizer roles.

Action

Steps

Obtain an API-Token Pair

  1. Log in to the Darktrace portal with your email and password.

  2. Navigate to the System Config page on the Threat Visualizer of the instance you wish to request data from. Select "Settings" from the left-hand menu.

  3. Locate the "API Token" subsection and click "New".

  4. Two values will be displayed: a public token and a private token. The private token will not be displayed again.

Both tokens are required to generate the DT-API-Signature value, which must be passed with every API request made to the appliance, so make sure you record them securely.
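As a sketch of how the signature is built (based on Darktrace's published API authentication scheme; verify the exact header names and date format against the API guide for your appliance version), the DT-API-Signature is an HMAC-SHA1 hex digest of the request path, public token, and timestamp, keyed with the private token:

```python
import hmac
import hashlib
from datetime import datetime, timezone

def dt_signature(public_token: str, private_token: str,
                 request_path: str, date_str: str) -> str:
    """Build the DT-API-Signature header value: HMAC-SHA1 (hex) over the
    request path (including any query string), the public token, and the
    timestamp, joined by newlines and keyed with the private token."""
    message = f"{request_path}\n{public_token}\n{date_str}"
    return hmac.new(private_token.encode(), message.encode(),
                    hashlib.sha1).hexdigest()

# Example headers for a request to /status (tokens are placeholders):
public_token = "examplePublicToken"
private_token = "examplePrivateToken"
date_str = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
headers = {
    "DT-API-Token": public_token,
    "DT-API-Date": date_str,
    "DT-API-Signature": dt_signature(public_token, private_token,
                                     "/status", date_str),
}
```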

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

This minimum configuration refers exclusively to the parameters specific to this integration. There are more required parameters related to the generic behavior of the collector; check the settings sections for details.

| Setting | Details |
| base_url | The Darktrace Respond API base URL (for example, https://euw1-1234-01.cloud.darktrace.com). |
| public_token | The public token obtained from Darktrace Respond, used for authentication. |
| private_token | The private token obtained from Darktrace Respond, used for authentication. |
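Putting these settings together, an on-premise configuration fragment might look like the sketch below. The surrounding layout (input name, id, service keys) follows the generic Devo collector structure, and the start_time_in_utc_format value is illustrative; check both against the configuration template shipped with the collector package, and treat the tokens and URL as placeholders.

```yaml
inputs:
  darktrace:
    id: "122312"
    enabled: true
    credentials:
      base_url: "https://euw1-1234-01.cloud.darktrace.com"
      public_token: "<public-token>"
      private_token: "<private-token>"
    services:
      status:
        start_time_in_utc_format: "2024-01-01T00:00:00Z"
```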

Accepted authentication methods

| Authentication method | Public token | Private token | Base URL |
| API-token pair | Required | Required | Required |

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector on your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

Events service

Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console.

This service has the following components:

| Component | Description |
| Setup | The setup module is in charge of authenticating the service and managing token expiration when needed. |
| Puller | The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK. |

Setup output

A successful run has the following output messages for the setup module:

2024-11-19T11:32:18.920 INFO InputProcess::MainThread -> CollectorDarktracePullerSetup(darktrace#122312,summarystatistics#predefined) -> Starting thread
2024-11-19T11:32:18.920 INFO InputProcess::MainThread -> StatelessServicePuller(darktrace#122312,summarystatistics#predefined) - Starting thread
2024-11-19T11:32:18.922 WARNING InputProcess::StatelessServicePuller(darktrace#122312,summarystatistics#predefined) -> Waiting until setup will be executed
2024-11-19T11:32:18.923 INFO InputProcess::MainThread -> InputMetricsThread -> Started thread for updating metrics values (update_period=10.0)
2024-11-19T11:32:18.945 INFO OutputProcess::MainThread -> DevoSender(lookup_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] There is no data persisted with the latest format, any previous persisted data will be migrated
2024-11-19T11:32:18.945 INFO OutputProcess::MainThread -> DevoSender(lookup_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] No previous persistence file exists to migrate (Version 1), filename_path: "/home/md_tausif/gitlab/devo-collector-darktrace/state/df8895fef2a509cbd87fcc9850dc0c81"
2024-11-19T11:32:18.946 INFO OutputProcess::MainThread -> OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Created persistence instance, filename_path: /home/md_tausif/gitlab/devo-collector-darktrace/state/not_used/OutputLookupConsumer;lookup_senders;0.json.gz
2024-11-19T11:32:18.946 INFO OutputProcess::MainThread -> OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] There is no data persisted with the latest format, any previous persisted data will be migrated
2024-11-19T11:32:18.947 INFO OutputProcess::MainThread -> OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] No previous persistence file exists to migrate (Version 1), filename_path: "/home/md_tausif/gitlab/devo-collector-darktrace/state/865a79c1b99ad39b22becc235c9732cb"
2024-11-19T11:32:18.947 INFO OutputProcess::MainThread -> DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Created persistence instance, filename_path: /home/md_tausif/gitlab/devo-collector-darktrace/state/not_used/DevoSenderManager;internal_senders;devo_us_1.json.gz
2024-11-19T11:32:18.948 INFO OutputProcess::MainThread -> DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] There is no data persisted with the latest format, any previous persisted data will be migrated
2024-11-19T11:32:18.948 INFO OutputProcess::MainThread -> DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] No previous persistence file exists to migrate (Version 1), filename_path: "/home/md_tausif/gitlab/devo-collector-darktrace/state/34012509abf2225d01ba2e6297651032"
2024-11-19T11:32:18.949 INFO InputProcess::MainThread -> [GC] global: 24.6% -> 24.7%, process: RSS(62.17MiB -> 62.54MiB), VMS(522.05MiB -> 522.05MiB)
2024-11-19T11:32:18.949 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "vendor_requests" created: "Number of requests received from the vendor API", unit: "requests"
2024-11-19T11:32:18.950 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_incoming_received" created: "Number of messages received from the vendor API", unit: "1"
2024-11-19T11:32:18.950 INFO OutputProcess::MainThread -> DevoSender(internal_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Created persistence instance, filename_path: /home/md_tausif/gitlab/devo-collector-darktrace/state/not_used/DevoSender;internal_senders;devo_sender_0.json.gz
2024-11-19T11:32:18.950 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_incoming_removed" created: "Number of messages removed by the collector", unit: "1"
2024-11-19T11:32:18.950 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_incoming_filtered" created: "Number of messages filtered by the collector", unit: "1"
2024-11-19T11:32:18.951 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_standard_counter" created: "Number of messages enqueued", unit: "1"
2024-11-19T11:32:18.951 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_standard_bytes" created: "Number of bytes enqueued", unit: "1"
2024-11-19T11:32:18.951 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_lookup_counter" created: "Number of messages enqueued", unit: "1"
2024-11-19T11:32:18.951 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_lookup_bytes" created: "Number of messages enqueued", unit: "1"
2024-11-19T11:32:18.951 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_internal_counter" created: "Number of messages enqueued in the queue", unit: "1"
2024-11-19T11:32:18.951 INFO OutputProcess::MainThread -> DevoSender(internal_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] There is no data persisted with the latest format, any previous persisted data will be migrated
2024-11-19T11:32:18.952 INFO OutputProcess::MainThread -> DevoSender(internal_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] No previous persistence file exists to migrate (Version 1), filename_path: "/home/md_tausif/gitlab/devo-collector-darktrace/state/4ff7b345dc444ac050cf75f93e5dcb3b"
2024-11-19T11:32:18.952 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_enqueued_internal_bytes" created: "Number of messages enqueued in the queue", unit: "1"
2024-11-19T11:32:18.952 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Gauge "module_global_status" created: "Global status of current module", unit: "1"
2024-11-19T11:32:18.952 INFO OutputProcess::MainThread -> OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Created persistence instance, filename_path: /home/md_tausif/gitlab/devo-collector-darktrace/state/not_used/OutputInternalConsumer;internal_senders;0.json.gz
2024-11-19T11:32:18.953 INFO OutputProcess::MainThread -> OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] There is no data persisted with the latest format, any previous persisted data will be migrated
2024-11-19T11:32:18.953 INFO OutputProcess::MainThread -> OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] No previous persistence file exists to migrate (Version 1), filename_path: "/home/md_tausif/gitlab/devo-collector-darktrace/state/10dd360c86621afd5a28a029a0dddcf6"
2024-11-19T11:32:18.953 INFO OutputProcess::MainThread -> DevoSender(standard_senders,devo_sender_0) -> Starting thread
2024-11-19T11:32:18.953 INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(standard_senders,devo_us_1) -> Starting thread (every 300 seconds)
2024-11-19T11:32:18.954 INFO OutputProcess::MainThread -> DevoSenderManager(standard_senders,manager,devo_us_1) -> Starting thread
2024-11-19T11:32:18.954 INFO OutputProcess::DevoSenderManager(standard_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.954 INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.954 INFO OutputProcess::MainThread -> DevoSender(lookup_senders,devo_sender_0) -> Starting thread
2024-11-19T11:32:18.955 INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(lookup_senders,devo_us_1) -> Starting thread (every 300 seconds)
2024-11-19T11:32:18.955 INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.955 INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.955 INFO OutputProcess::MainThread -> DevoSenderManager(lookup_senders,manager,devo_us_1) -> Starting thread
2024-11-19T11:32:18.955 INFO OutputProcess::DevoSenderManager(lookup_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.955 INFO OutputProcess::OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.956 INFO OutputProcess::MainThread -> DevoSender(internal_senders,devo_sender_0) -> Starting thread
2024-11-19T11:32:18.956 INFO OutputProcess::OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.956 INFO OutputProcess::OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.957 INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(internal_senders,devo_us_1) -> Starting thread (every 300 seconds)
2024-11-19T11:32:18.957 INFO OutputProcess::DevoSenderManager(lookup_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.957 INFO OutputProcess::DevoSenderManager(lookup_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.957 INFO OutputProcess::MainThread -> DevoSenderManager(internal_senders,manager,devo_us_1) -> Starting thread
2024-11-19T11:32:18.957 INFO OutputProcess::DevoSenderManager(standard_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.958 INFO OutputProcess::DevoSenderManager(standard_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.958 INFO OutputProcess::DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.958 INFO OutputProcess::OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Recovering any available content from the persistence system
2024-11-19T11:32:18.958 INFO OutputProcess::MainThread -> OutputMetricsThread -> Started thread for updating metrics values (update_period=10.0)
2024-11-19T11:32:18.959 INFO OutputProcess::OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.959 INFO OutputProcess::OutputInternalConsumer(internal_senders_consumer_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.960 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_sent_counter" created: "Number of messages sent to the defined output", unit: "1"
2024-11-19T11:32:18.960 INFO OutputProcess::DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:18.960 INFO OutputProcess::DevoSenderManager(internal_senders,manager,devo_us_1) -> [EMERGENCY_PERSISTENCE_SYSTEM] Elapsed seconds: 0.00
2024-11-19T11:32:18.978 INFO OutputProcess::MainThread -> [GC] global: 24.7% -> 24.7%, process: RSS(62.03MiB -> 62.16MiB), VMS(1.07GiB -> 1.07GiB)
2024-11-19T11:32:18.979 INFO MainProcess::MetricsConsumerThread -> OpenTelemetryServer -> [METRIC] Counter "msg_sent_bytes" created: "Number of bytes sent to the defined output", unit: "1"
2024-11-19T11:32:19.318 INFO OutputProcess::DevoSender(internal_senders,devo_sender_0) -> Created a sender: {"name": "DevoSender(internal_senders,devo_sender_0)", "url": "collector-eu.devo.io:443", "chain_path": "/home/md_tausif/gitlab/devo-collector-darktrace/certs/chain.crt", "cert_path": "/home/md_tausif/gitlab/devo-collector-darktrace/certs/int-if-integrations-india.crt", "key_path": "/home/md_tausif/gitlab/devo-collector-darktrace/certs/int-if-integrations-india.key", "transport_layer_type": "SSL", "last_usage_timestamp": null, "socket_status": null}, hostname: "2023-apac-0046", session_id: "140102146981120"
2024-11-19T11:32:19.319 INFO OutputProcess::DevoSender(internal_senders,devo_sender_0) -> [EMERGENCY_PERSISTENCE_SYSTEM] Nothing available in the persistence system
2024-11-19T11:32:19.658 INFO InputProcess::CollectorDarktracePullerSetup(darktrace#122312,summarystatistics#predefined) -> Setup for module <StatelessServicePuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Note that the PrePull action is executed only one time before the first run of the Pull action.

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:

  1. Edit the configuration file.

  2. Change the value of the start_time_in_utc_format parameter to a different one.

  3. Save the changes.

  4. Restart the collector.

The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

| Error type | Error ID | Error message | Cause | Solution |
| SetupError | 100 | Error occurred while requesting from the Darktrace server. Error message: {e} | Darktrace API call is failing | Ensure that the collector has the necessary permissions to access the Darktrace API, and contact the developer with the exact error message. |
| SetupError | 101 | The tokens provided are incorrect. Please specify the correct credentials. Error message {e} | Darktrace API call is failing | Check the credentials and ensure that the collector has the necessary permissions to access the Darktrace API. |
| SetupError | 102 | The provided tokens are valid but they do not have the permission to get data: Error message {e} | Darktrace API call is failing | Contact the developer with the exact error message. |
| SetupError | 103 | Unexpected HTTP error occurred at the Darktrace server. status code, {status_code} error: {e} | Darktrace API call is failing | Contact the developer with the exact error message. |
| PullError | 300 | HTTP Error occurred while retrieving events from Darktrace server: summary {summary} details {details} | Darktrace API call is failing | Contact the developer with the exact error message. |
| PullError | 301 | Some error occurred while retrieving events from Darktrace server. Error details: {e} | Darktrace API call is failing | Contact the developer with the exact error message. |

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, and of validating the given configuration.

A successful run has the following output messages for the initializer module:

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.

A successful run has the following output messages for the event delivery module:

Sender services

The Integrations Factory Collector SDK has three different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:

| Sender service | Description |
| internal_senders | In charge of delivering internal metrics to Devo, such as logging traces or metrics. |
| standard_senders | In charge of delivering pulled events to Devo. |

Sender statistics

Each service displays its own performance statistics that allow you to check how many events have been delivered to Devo by type:

| Logging trace | Description |
| Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service. |
| sender manager internal queue size: 0 | Displays the number of items available in the internal sender queue. This value helps to detect bottlenecks and the need to improve the performance of data delivery to Devo, which can be done by increasing the number of concurrent senders. |
| Standard - Total number of messages sent: 0, messages sent since "2024-11-19 06:12:18.956130+00:00": 0 (elapsed 0.000 seconds) | Displays the number of events sent to Devo since the collector started and since the last checkpoint. |

For example, from a trace reporting 44 total messages sent and 21 messages sent since the checkpoint 2023-01-10 16:09:16.116750+00:00 (elapsed 0.007 seconds), the following conclusions can be obtained:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2023-01-10 16:09:16.116750+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.

To check the memory usage of this collector, look for the [GC] log records in the collector output. They are displayed every 5 minutes by default, always after running the memory-free process.

  • The used memory is displayed per running process; the sum of both values gives the total memory used by the collector.

  • The global pressure of the available memory is displayed in the global value.

  • All metrics (global, RSS, VMS) show the value before and after freeing memory (before -> after).
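For instance, the "[GC]" records shown in the setup output above can be extracted with a small script (a hypothetical helper for monitoring purposes, not part of the collector):

```python
import re

# Matches traces such as:
# [GC] global: 24.6% -> 24.7%, process: RSS(62.17MiB -> 62.54MiB), VMS(522.05MiB -> 522.05MiB)
GC_PATTERN = re.compile(
    r"\[GC\] global: (?P<g_before>[\d.]+)% -> (?P<g_after>[\d.]+)%, "
    r"process: RSS\((?P<rss_before>[\d.]+)(?P<rss_unit>[KMG]iB) -> (?P<rss_after>[\d.]+)[KMG]iB\), "
    r"VMS\((?P<vms_before>[\d.]+)[KMG]iB -> (?P<vms_after>[\d.]+)[KMG]iB\)"
)

def parse_gc_trace(line: str) -> dict:
    """Return the before/after memory figures from a [GC] log record."""
    match = GC_PATTERN.search(line)
    if match is None:
        raise ValueError("not a [GC] trace")
    # Convert numeric fields to float, keep the unit as a string.
    return {k: float(v) if v[0].isdigit() else v
            for k, v in match.groupdict().items()}

sample = ("2024-11-19T11:32:18.949 INFO InputProcess::MainThread -> "
          "[GC] global: 24.6% -> 24.7%, process: RSS(62.17MiB -> 62.54MiB), "
          "VMS(522.05MiB -> 522.05MiB)")
print(parse_gc_trace(sample)["rss_after"])  # 62.54
```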

Change log

Release

Released on

Release type

Details

Recommendations

v1.1.0

Nov 19, 2024

Improvement

Improvements

  • Upgraded Docker base image to 1.3.1

  • Updated the DCSDK from v1.7.2 to v1.13.1

    • Fixed bug related to module_global_status value in message_metrics

    • PEP8 Cleanup

    • New metric endpoint (http://0.0.0.0:3000/metrics)

    • Changed the structure of some metrics that were already sent to Devo (devo.collector.metric.*)

    • Improved MacOS compatibility (for the development phase)

    • Updated DevoSDK to version 6.0.0

    • Puller and PullerSetup now have the same id structure

    • Changed some console log traces to DEBUG

    • Fixed bug related to rate_limiter object (the object was not properly internally released)

    • Improved Filesystem persistence behavior

    • python-dateutil==2.8.2 -> python-dateutil==2.9.0.post0

    • Fixed error related with loss of some values in internal messages (collector_name, collector_id and job_id)

    • Improve Controlled stop when InputProcess is killed

    • Change internal queue management for protecting against OOM Killer

    • Extracted ModuleThread structure from PullerAbstract

    • Improve Controlled stop when both processes fail to instantiate

    • Fixed error related to a ValueError exception not being well controlled

    • Upgrade DevoSDK dependency to version v5.4.0

    • Fixed an error in the persistence system

    • Changes to make DCSDK compatible with MacOS

    • Added new sender for relay in house + TLS

    • Added persistence functionality for gzip sending buffer

    • Added Automatic activation of gzip sending

    • Improved behaviour when persistence fails

    • Upgraded DevoSDK dependency

    • Fixed console log encoding

    • Restructured python classes

    • Improved behaviour with non-utf8 characters

    • Decreased default size value for internal queues (Redis limitation, from 1GiB to 256MiB)

    • New persistence format/structure (compression in some cases)

    • Removed dmesg execution (It was invalid for docker execution)

    • Added extra check for invalid message timestamps

    • Added extra check to improve the controlled stop

    • Changed default number for connection retries (now 7)

    • Fix for Devo connection retries

    • Updated DevoSDK to v5.1.10

    • Fix for SyslogSender related to UTF-8

    • Enhanced troubleshooting: standardized traces and introduced some new ones

    • Introduced a mechanism to detect "Out of Memory killer" situation.

    • Updated DevoSDK to v5.1.9

    • Fixed some bugs related to development on MacOS

    • Added an extra validation and fix when the DCSDK receives a wrong timestamp format

    • Added an optional config property to use the Syslog timestamp format in a strict way

    • Fixed error in pyproject.toml related to project scripts endpoint

    • Updated DevoSDK to v5.1.7

    • Introduced pyproject.toml

    • Added requirements-dev.txt

    • Added input metrics

    • Modified output metrics

    • Updated DevoSDK to v5.1.6

    • Standardized exception messages for traceability (Dynatrace related)

    • Added more detail in queue statistics

    • Upgrade internal dependencies

    • Store lookup instances into DevoSender to avoid creation of new instances for the same lookup

    • Ensure service_config is a dict into templates

    • Ensure special characters are properly sent to the platform

    • Changed log level to some messages from info to debug

    • Changed some wrong log messages

    • Upgraded some internal dependencies

Recommended version

v1.0.0

May 11, 2023

FIRST RELEASE


Released the first version of the Darktrace Respond collector.

Recommended version