The CrowdStrike Falcon platform is a powerful solution that includes EDR (Endpoint Detection and Response), next-generation anti-virus, and device control for endpoints. It also provides a host of other operational capabilities across IT operations and security, including threat intelligence.
Check the {base_url} entry in the configuration parameters for further information.
Data Source: File Vantage
Subtype: -
Service: filevantage
Table: edr.crowdstrike.falcon_filevantage.change
Description
Collect data about changes to files, folders, and registry keys with the Falcon FileVantage APIs. Store this data to help you meet compliance recommendations and requirements such as those listed in the Sarbanes-Oxley Act, National Institute of Standards and Technology (NIST) publications, the Health Insurance Portability and Accountability Act (HIPAA), and others.
The Streaming API provides several types of events.
Endpoint
The endpoints are dynamically generated by following this (simplified) approach:
Once an authentication token has been obtained, a request to {base_url}/sensors/entities/datafeed/v2 is performed to obtain the "Data Feeds".
Check the {base_url} entry in the configuration parameters for further information.
Each Data Feed will contain a URL and a session token. A request to each of these URLs (along with its corresponding token) will return a streaming response in which every non-empty line represents a different event.
Every Data Feed will also contain a "refresh stream" URL, which must be accessed at least once every 30 minutes to keep the session alive.
All the Data Feeds are processed in parallel. The number of available Data Feeds depends on the CrowdStrike account's configuration.
For more information on how the events are parsed, visit our page.
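To make the flow above concrete, here is a hedged sketch using curl and jq. The appId label is arbitrary, and the JSON field paths (resources[].dataFeedURL, resources[].sessionToken.token) follow CrowdStrike's Event Streams API documentation; verify them against your account before relying on them.
Code Block
# Sketch of the Data Feed discovery flow described above (assumes curl and jq).
BASE_URL="https://api.crowdstrike.com"

# 1. Obtain an OAuth2 token with the API client credentials.
TOKEN=$(curl -s -X POST "$BASE_URL/oauth2/token" \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "client_secret=$SECRET_KEY" | jq -r '.access_token')

# 2. Discover the available Data Feeds.
FEEDS=$(curl -s "$BASE_URL/sensors/entities/datafeed/v2?appId=devo_collector" \
  -H "Authorization: Bearer $TOKEN")

# 3. Stream from the first feed: every non-empty line is one event.
FEED_URL=$(echo "$FEEDS" | jq -r '.resources[0].dataFeedURL')
FEED_TOKEN=$(echo "$FEEDS" | jq -r '.resources[0].sessionToken.token')
curl -sN "$FEED_URL" -H "Authorization: Token $FEED_TOKEN"

# 4. In parallel, the "refresh stream" URL of each feed must be called at
#    least once every 30 minutes to keep the session alive (omitted here).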
Available from v1.10.0
Data Source: Alerts
Subtype: -
Service: alerts
Table: edr.crowdstrike.falconstreaming.alert
Description
Alerts are events that occur in an organization which can represent a cybersecurity threat or an attack.
Endpoint
Listing: {base_url}/alerts/queries/alerts/v2
Details: {base_url}/alerts/entities/alerts/GET/v2
Check the {base_url} entry in the configuration parameters for further information.
For more information on how the events are parsed, visit our page.
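As a hedged illustration of these two endpoints, reusing the $BASE_URL and $TOKEN variables from the sketch above; the composite_ids request body follows the v2 API convention, but treat it as an assumption and verify it against CrowdStrike's API reference.
Code Block
# List the latest alert IDs, then fetch their details in one batch.
IDS=$(curl -s "$BASE_URL/alerts/queries/alerts/v2?limit=10" \
  -H "Authorization: Bearer $TOKEN" | jq -c '.resources')
curl -s -X POST "$BASE_URL/alerts/entities/alerts/GET/v2" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"composite_ids\": $IDS}"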
Endpoint
The endpoints are dynamically generated by following this (simplified) approach:
Once an authentication token has been obtained, a request to {base_url}/sensors/entities/datafeed/v2 is performed to obtain the "Data Feeds".
Check the {base_url} entry in the configuration parameters for further information.
Each Data Feed will contain a URL and a session token. A request to each of these URLs (along with its corresponding token) will return a streaming response in which every non-empty line represents a different event.
Every Data Feed will also contain a "refresh stream" URL, which must be accessed at least once every 30 minutes to keep the session alive.
All the Data Feeds are processed in parallel. The number of available Data Feeds depends on the CrowdStrike account's configuration.
For more information on how the events are parsed, visit our page.
Available from v1.12.0
Data Source: Indicators
Subtype: -
Service: indicators
Table: edr.crowdstrike.falconstreaming.indicators
Description
The Indicators endpoints allow you to query for various types of indicators: indicators related to specific adversaries, indicators of a given confidence level, indicators associated with reports, and so on.
Check the {base_url} entry in the configuration parameters for further information.
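For illustration, a hedged sketch of querying indicators, reusing the $BASE_URL and $TOKEN variables from the streaming sketch above. The /intel/queries/indicators/v1 path and the FQL filter syntax are assumptions based on CrowdStrike's Intel API and may differ for your account.
Code Block
# Query up to 10 indicator IDs with a high confidence level.
curl -s -G "$BASE_URL/intel/queries/indicators/v1" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "filter=malicious_confidence:'high'" \
  --data-urlencode "limit=10"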
Accepted Authentication Methods
Authentication method
Details
user/pass
You will need your client_id_value, which acts as a user, and your secret_key_value, which acts as a password, to connect to the API and execute API requests.
Info
Treat Your Secret Key Like A Password
The security of your application is tied to the security of your secret key. Secure it as you would any sensitive credential. Don't share it with unauthorized individuals or email it to anyone under any circumstances.
Vendor setup
In order to configure the Devo | CrowdStrike API Resources collector, you need to create an API client that will be used to authenticate API requests.
...
Finally, copy the Client ID and Client Secret shown on the next screen. You will need these values to configure the collector.
Run the collector
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector on your own machine using a Docker image (On-premise collector).
Cloud collector
We use a piece of software called Collector Server to host and manage all our available collectors.
To enable the collector for a customer:
In the Collector Server GUI, access the domain in which you want this instance to be created.
Click Add Collector and find the one you wish to add.
In the Version field, select the latest value.
In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).
In the sending method, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.
In the Parameters section, establish the Collector Parameters as described below:
Replace the placeholders with the required values:
Parameter
Data Type
Requirement
Value Range / Format
Description
short_unique_id
int
Mandatory
Min length: 1
Use this parameter to give a unique ID to this input service.
override_base_url_value
str
Optional
Min length: 1
By default, the base URL is https://api.crowdstrike.com. This parameter allows you to customize the base URL.
Info
This parameter should be removed if it is not used.
client_id_value
str
Mandatory
Min length: 1
User Client ID to authenticate to the service.
secret_key_value
str
Mandatory
Min length: 1
User Secret Key to authenticate to the service.
request_period_in_seconds_value
int
Optional
Must be > 0
By default, this service will run every 600 seconds. This parameter, set in the service section of the user config, allows you to customize this behavior.
Info
This parameter should be removed if it is not used.
start_timestamp_in_epoch_seconds_value
int
Mandatory
Format: Unix timestamp. Minimum value: 1609455600. Maximum value: now.
Initial timestamp used when fetching data from the endpoint.
Info
Updating this value will cause the loss of all persisted data and current pipelines.
reset_persistence_auth_value
str
Optional
Format: YYYY-MM-DDTHH:mm:ss.SSSZ
Maximum value: current date
This parameter allows you to clear the persistence of the collector and restart the download pipeline. Updating this value will cause the loss of all persisted data and current pipelines.
Info
This parameter should be removed if it is not used.
override_offset_save_batch_size_in_events_value
int
Optional
Minimum value: 1 Maximum value: 1000
Although the stream services use a streaming API (events are fetched continuously, one by one), collected events are sent in batches for better performance. This parameter controls the number of items sent per batch. The default value is 10.
Info
This parameter should be removed if it is not used.
override_max_seconds_after_last_ingestion_value
int
Optional
Minimum value: 1 Maximum value: 1000
If the collector has not ingested a batch of events in the last n seconds, the connection is closed and all the streams are restarted. This parameter configures that time span.
Info
This parameter should be removed if it is not used.
partition_offset_value
object
Optional
It has the following structure: initial_partition_offsets: <partition_id_value>:<partition_offset_value>
Where:
<partition_id_value>: The partition ID (0, 1, 2…) that will use this initial offset.
<partition_offset_value>: The initial offset. This offset will not be included in the ingestion (it will start from the next offset).
The CrowdStrike Events Stream has partitions; each one streams its own events and manages its own event offset. You can specify an initial offset from which to start receiving events. This parameter allows you to define initial offsets for the initial run of this service or when the state is being reset.
Info
This parameter should be removed if it is not used.
tagging_version_value
str
Optional
A version string (like "1.3.0") or "latest".
This parameter selects the tagging mechanism to use, since new releases might introduce changes to the tagging.
If you want to keep the original tagging mechanism, remove this parameter.
If you want to use a specific mechanism created for a certain release, set your desired version.
If you always want the latest tagging mechanism, without backwards compatibility, use latest.
Info
This parameter should be removed if it is not used.
<lowercased_event_type_value>: Every event's metadata.eventType (lowercased) JSON property.
<fourth_tag_level_value>: The fourth level for the edr.crowdstrike.falconstreaming.{value} tag.
If you want a custom destination tag for certain events that is not covered by default, you can set it up using this parameter.
Info
This parameter should be removed if it is not used.
On-premise collector
This data collector can be run on any machine that has the Docker service available, since it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created before running this collector:
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in devo-collectors-crowdstrikeapi/certs/. Learn more about security credentials in Devo here.
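For example, the layout can be created as follows. Only the certs/ directory is confirmed by the step above; the config/ and state/ names are assumptions based on typical Devo collector layouts.
Code Block
mkdir -p devo-collectors-crowdstrikeapi/certs \
         devo-collectors-crowdstrikeapi/config \
         devo-collectors-crowdstrikeapi/state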
Replace the placeholders with the required values:
Parameter
Data Type
Requirement
Value Range / Format
Description
short_unique_id
int
Mandatory
Min length: 1
Use this parameter to give a unique ID to this input service.
override_base_url_value
str
Optional
Min length: 1
By default, the base URL is https://api.crowdstrike.com. This parameter allows you to customize the base URL.
Info
This parameter should be removed if it is not used.
client_id_value
str
Mandatory
Min length: 1
User Client ID to authenticate to the service.
secret_key_value
str
Mandatory
Min length: 1
User Secret Key to authenticate to the service.
request_period_in_seconds_value
int
Optional
Must be > 0
By default, this service will run every 600 seconds. This parameter, set in the service section of the user config, allows you to customize this behavior.
Info
This parameter should be removed if it is not used.
start_timestamp_in_epoch_seconds_value
int
Mandatory
Format: Unix timestamp. Minimum value: 1609455600. Maximum value: now.
Initial timestamp used when fetching data from the endpoint.
Info
Updating this value will cause the loss of all persisted data and current pipelines.
reset_persistence_auth_value
str
Optional
Format: YYYY-MM-DDTHH:mm:ss.SSSZ
Maximum value: current date
This parameter allows you to clear the persistence of the collector and restart the download pipeline. Updating this value will cause the loss of all persisted data and current pipelines.
Info
This parameter should be removed if it is not used.
override_offset_save_batch_size_in_events_value
int
Optional
Minimum value: 1 Maximum value: 1000
Although the stream services use a streaming API (events are fetched continuously, one by one), collected events are sent in batches for better performance. This parameter controls the number of items sent per batch. The default value is 10.
Info
This parameter should be removed if it is not used.
override_max_seconds_after_last_ingestion_value
int
Optional
Minimum value: 1 Maximum value: 1000
If the collector has not ingested a batch of events in the last n seconds, the connection is closed and all the streams are restarted. This parameter configures that time span.
Info
This parameter should be removed if it is not used.
partition_offset_value
object
Optional
It has the following structure: initial_partition_offsets: <partition_id_value>:<partition_offset_value>
Where:
<partition_id_value>: The partition ID (0, 1, 2…) that will use this initial offset.
<partition_offset_value>: The initial offset. This offset will not be included in the ingestion (it will start from the next offset).
The CrowdStrike Events Stream has partitions; each one streams its own events and manages its own event offset. You can specify an initial offset from which to start receiving events. This parameter allows you to define initial offsets for the initial run of this service or when the state is being reset.
Info
This parameter should be removed if it is not used.
tagging_version_value
str
Optional
A version string (like "1.3.0") or "latest".
This parameter selects the tagging mechanism to use, since new releases might introduce changes to the tagging.
If you want to keep the original tagging mechanism, remove this parameter.
If you want to use a specific mechanism created for a certain release, set your desired version.
If you always want the latest tagging mechanism, without backwards compatibility, use latest.
Info
This parameter should be removed if it is not used.
<lowercased_event_type_value>: Every event's metadata.eventType (lowercased) JSON property.
<fourth_tag_level_value>: The fourth level for the edr.crowdstrike.falconstreaming.{value} tag.
If you want a custom destination tag for certain events that is not covered by default, you can set it up using this parameter.
Info
This parameter should be removed if it is not used.
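To tie the parameters above together, here is a minimal, hypothetical sketch of a configuration file. The authoritative template ships with the collector package, so every key name below is an assumption derived from the parameter names in the table.
Code Block
# Hypothetical config.yaml sketch; replace it with the template from the package.
cat > devo-collectors-crowdstrikeapi/config/config.yaml <<'EOF'
inputs:
  crowdstrike:
    id: 100                                 # short_unique_id
    credentials:
      client_id: <client_id_value>          # Mandatory
      secret_key: <secret_key_value>        # Mandatory
    services:
      alerts:
        start_timestamp_in_epoch_seconds: 1700000000  # Mandatory, >= 1609455600
EOF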
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace <product_name>, <image_name> and <version> with the proper values.
The Docker image can be deployed on the following services:
Docker
Execute the following command in the root directory <any_directory>/devo-collectors/<product_name>/
Replace <product_name>, <image_name> and <version> with the proper values.
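A hedged sketch of such a command; the container mount points are assumptions, so use the paths stated in the package documentation.
Code Block
docker run -d --name <product_name>-collector \
  -v "$PWD/config:/etc/devo/collector" \
  -v "$PWD/certs:/etc/devo/certs" \
  <image_name>:<version>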
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.
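A hedged sketch of what such a file might look like; the image reference and mount paths are assumptions, and IMAGE_VERSION matches the run command below.
Code Block
# Hypothetical docker-compose.yaml sketch; adapt the image name and paths to the package.
cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  collector:
    image: <image_name>:${IMAGE_VERSION}
    volumes:
      - ./config:/etc/devo/collector
      - ./certs:/etc/devo/certs
EOF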
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:
Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note
Replace <product_name>, <image_name> and <version> with the proper values.
Collector services detail
This section is intended to explain how to proceed with specific actions for services.
...
Restart the persistence
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. If you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
Edit the configuration file.
Change the value of the reset_persistence_auth_value to a different one.
Save the changes.
Restart the collector.
The collector will detect this change and will restart the persistence using the parameters of the configuration file.
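For instance, assuming the parameter is stored in the configuration file under a key matching the parameters table (both the key name and the container name below are hypothetical), the reset could look like this:
Code Block
# Set the reset value to anything different from the previous one
# (format YYYY-MM-DDTHH:mm:ss.SSSZ), then restart the collector.
sed -i 's/reset_persistence_auth: .*/reset_persistence_auth: 2024-05-01T00:00:00.000Z/' \
  devo-collectors-crowdstrikeapi/config/config.yaml
docker restart <product_name>-collector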
Troubleshooting
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Configuration errors
Error Type
Error Id
Error Message
Cause
Solution
InitVariablesError
1-2
Invalid content detected in the configuration
The module_properties setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
3-5
Invalid content detected in the configuration
The base_url setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
6-7
Invalid content detected in the configuration
The override_base_url setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
8-9
Invalid content detected in the configuration
The base_tag setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
10-11
Invalid content detected in the configuration
The user_agent setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
12-13
Invalid content detected in the configuration
The endpoint setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
14-15
Invalid content detected in the configuration
The auth setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
16-17
Invalid content detected in the configuration
The event_list setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
18-19
Invalid content detected in the configuration
The details setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
20-22
Invalid content detected in the configuration
The logs_limit_in_items setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
23-24
Invalid content detected in the configuration
The credentials setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
25-26
Invalid content detected in the configuration
The client_id setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
27-28
Invalid content detected in the configuration
The secret_key setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
29-31
Invalid content detected in the configuration
The start_timestamp_in_epoch_seconds setting does not have the right format.
Check the documentation and update the configuration accordingly
InitVariablesError
32-33
Invalid content detected in the configuration
The unique_identifier setting does not have the right format.
Check the documentation and update the configuration accordingly
SetupError
100
Required credentials are invalid
Required credentials are invalid
Include the proper credentials in the configuration
SetupError
101
Service not found
A declared service is not valid
Include the proper service name in the configuration
SetupError
102-103
The token has no access
The generated token cannot access a service list.
Enable the service in the CrowdStrike configuration
SetupError
104-105
The token has no access
The generated token cannot access service details.
Enable the service in the CrowdStrike configuration
Runtime errors
Error Type
Error Id
Error Message
Cause
Solution
PrePullError
200
Error before pulling data
The start time is newer than the current date
Update the configuration
PullError
300-312
Error pulling data
Error pulling data from the service
Review the error and act accordingly if required.
ApiError
400-403
API error
The API returned an error
Review the error and act accordingly if required.
Collector operations
This section is intended to explain how to proceed with specific operations of this collector.
Operations to verify collector
Initialization
The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services and validating the given configuration. A successful run has the following output messages for the initializer module:
Code Block
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Loading configuration using the following files: {"full_config": "config-test-local.yaml", "job_config_loc": null, "collector_config_loc": null}
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Using the default location for "job_config_loc" file: "/etc/devo/job/job_config.json"
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> "\etc\devo\job" does not exists
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> Using the default location for "collector_config_loc" file: "/etc/devo/collector/collector_config.json"
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> "\etc\devo\collector" does not exists
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> Results of validation of config files parameters: {"config": "C:\git\collectors2\devo-collector-<name>\config\config.yaml", "config_validated": True, "job_config_loc": "/etc/devo/job/job_config.json", "job_config_loc_default": True, "job_config_loc_validated": False, "collector_config_loc": "/etc/devo/collector/collector_config.json", "collector_config_loc_default": True, "collector_config_loc_validated": False}
2023-01-10T15:22:57.171 WARNING MainProcess::MainThread -> [WARNING] Illegal global setting has been ignored -> multiprocessing: False
Events delivery and Devo ingestion
The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for this module:
The Integrations Factory Collector SDK has 3 different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following Sender Services:
Logging trace
Description
Number of available senders: 1
Displays the number of concurrent senders available for the given Sender Service.
Sender manager internal queue size: 0
Displays the items available in the internal sender queue.
This value helps detect bottlenecks and the need to increase the performance of data delivery to Devo, which can be achieved by increasing the number of concurrent senders.
Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)
Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:
44 events were sent to Devo since the collector started.
The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.
21 events were sent to Devo between the last UTC checkpoint and now.
Those 21 events required 0.007 seconds to be delivered.
By default, these traces are shown every 10 minutes.
Sender statistics
Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
Logging trace
Description
Number of available senders: 1
Displays the number of concurrent senders available for the given Sender Service
Sender manager internal queue size: 0
Displays the items available in the internal sender queue.
Standard - Total number of messages sent: 57, messages sent since "2023-01-10 16:09:16.116750+00:00": 0 (elapsed 0.000 seconds)
Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:
57 events were sent to Devo since the collector started.
The last checkpoint timestamp was 2023-01-10 16:09:16.116750+00:00.
0 events were sent to Devo between the last UTC checkpoint and now.
Those 0 events required 0.000 seconds to be delivered.
Check memory usage
To check the memory usage of this collector, look for the following log records, which are displayed every 5 minutes by default, always after running the memory-freeing process.
The used memory is displayed per running process; the sum of both values gives the total memory used by the collector.
The global pressure of the available memory is displayed in the global value.
All metrics (Global, RSS, VMS) are displayed as the value before freeing memory -> the value after freeing memory.
Change history
Features:
Added EPP Detection Summary events as a default service.
Improvements:
Updated DCSDK
v1.10.0 - NEW FEATURES - Upgrade
Feature:
Added new service Alerts.
v1.9.1 - IMPROVEMENTS - Upgrade
Improvements:
Solved CVE-2024-45490, CVE-2024-45491, CVE-2024-45492 by updating docker base image version to 1.3.1.
v1.9.0 - IMPROVEMENTS - Upgrade
Improvements:
Updated DCSDK from 1.12.2 to 1.12.4:
Changed internal queue management to protect against the OOM killer (OOMK).
Extracted the ModuleThread structure from PullerAbstract.
Improved the controlled stop when both processes fail to instantiate.
Improved the controlled stop when the InputProcess is killed.
Fixed an error related to a ValueError exception that was not properly handled.
Fixed an error related to the loss of some values in internal messages.
v1.8.0 - IMPROVEMENTS, BUG FIXING - Upgrade
Improvements:
Updated DCSDK from 1.11.1 to 1.12.2.
Updated the DCSDK base image to 1.3.0.
Bug fixing:
Fixed duplicated logs in event services.
v1.7.0 - IMPROVEMENTS, BUG FIXING - Upgrade
Improvements:
Added compatibility when reading the configuration to accept older parameters.
Bug fixing:
Fixed a bug when getting the eStream listing and improved the log message.
v1.6.0 - IMPROVEMENTS - Upgrade
Improvements:
Updated to DCSDK 1.11.1:
Added an extra check for invalid message timestamps.
Added an extra check to improve the controlled stop.
Changed the default number of connection retries (now 7).
Fix for Devo connection retries.
Updated DevoSDK to v5.1.9.
Fixed a bug related to development on macOS.
Added an extra validation and fix for when the DCSDK receives a wrong timestamp format.
Added an optional config property to use the Syslog timestamp format in a strict way.
Updated DevoSDK to v5.1.10.
Fix for SyslogSender related to UTF-8.
Enhanced troubleshooting: trace standardization; some new traces have been introduced.
Introduced a mechanism to detect "Out of Memory killer" situations.
v1.4.3 - IMPROVEMENTS - Upgrade
Improvements:
New functionality: access to the File Vantage API.
Updated DCSDK from 1.8.0 to 1.10.2:
Upgrade internal dependencies
Store lookup instances into DevoSender to avoid creation of new instances for the same lookup
Ensure service_config is a dict into templates
Ensure special characters are properly sent to the platform
Changed log level to some messages from info to debug
Changed some wrong log messages
Upgraded some internal dependencies
Changed queue passed to setup instance constructor
Added input metrics
Modified output metrics
Updated DevoSDK to version 5.1.6
Standardized exception messages for traceability
Added more detail in queue statistics
Updated PythonSDK to version 5.0.7
Introduced pyproject.toml
Added requirements.dev.txt
Fixed error in pyproject.toml related to project scripts endpoint
v1.4.2 - IMPROVEMENTS - Upgrade
Improvements:
Updated DCSDK from 1.7.2 to 1.8.0:
Ability to validate collector setup and exit without pulling any data.
Ability to store in the persistence the messages that couldn't be sent after the collector stopped.
Ability to send messages from the persistence when the collector starts and before the puller begins working.
Ensure special characters are properly sent to the platform.
v1.4.0 - IMPROVEMENTS, BUG FIXING - Upgrade
Improvements:
Added @devo_pulling_id field.
Updated the `details` endpoint to use the v2 API (due to v1 deprecation).
Bug Fixing:
Fixed a bug that prevented overriding the base URL.
v1.3.1 - IMPROVEMENTS - Upgrade
Improvements:
The RegEx validation has been updated to enforce the HTTP[S] protocol for all services when this parameter is filled in by the user.
The Event Stream (eStream) service has been updated to use the same base_url overriding parameter as the other services. This allows the user to define it only once for all available services through the override_base_url user config parameter.
v1.3.0 - IMPROVEMENTS, NEW FEATURES - Upgrade
Improvements:
Upgraded underlay IFC SDK v1.3.0 to v1.4.0.
Updated the underlying DevoSDK package to v3.6.4 and its dependencies. This upgrade increases the resilience of the collector when the connection with Devo or the Syslog server is lost; the collector is able to reconnect in some scenarios without running the self-kill feature.
Support for stopping the collector when a GRACEFULL_SHUTDOWN system signal is received.
Re-enabled the logging to devo.collector.out for Input threads.
Improved self-kill functionality behavior.
Added more details in log traces.
Added log traces for knowing system memory usage.
New Features:
CrowdStrike Event Stream (eStream) data source is now available. This service leverages the CrowdStrike Falcon Event Streams API to obtain the customer's Data Feed URLs and continuously fetch events that will be ingested under the edr.crowdstrike.falconstreaming.* family of tables. For more information, check CrowdStrike's official documentation.
v1.2.0 - IMPROVEMENTS - Upgrade
Improvements:
Upgraded underlay IFC SDK v1.1.3 to v1.3.0.
The resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.
When an exception is raised by the Collector Setup, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum retries are applied.
When an exception is raised by the Collector Pull method, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum retries are applied.
When an exception is raised by the Collector pre-pull method, the collector retries after 30 seconds. No maximum retries are applied.
v1.1.0 - IMPROVEMENTS, VULNS - Upgrade
Improvements:
The underlay IFC SDK has been updated from v1.1.2 to v1.1.3.
The resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.
Vulnerabilities mitigation:
All critical and high vulnerabilities have been mitigated.
v1.0.0 - NEW FEATURES
New Features:
Initial release that includes the following data sources from CrowdStrike API: