
Overview

The CrowdStrike Falcon platform is a powerful solution that includes EDR (Endpoint Detection and Response), next-generation anti-virus, and device control for endpoints. It also provides a whole host of other operational capabilities across IT operations and security including Threat Intelligence.

...

Purpose

Use this collector to get intelligence about attacks from CrowdStrike. It should be used with the CrowdStrike Falcon Data Replicator SQS collector, which replicates endpoint logs to Devo.

Devo Collector Features

  • Allow parallel downloading (multipod): Not allowed

  • Running environments: Collector Server, On Premise

  • Populated Devo events: Table

  • Flattening pre-processing: No

  • Allowed source events obfuscation: No

Data source description

Available from v1.0.0

Data Source

Subtype

Service

Table

Hosts

-

hosts

edr.crowdstrike.falconstreaming.agents

Description

Hosts are endpoints that run the Falcon sensor. You can get information and details about these agents.

End point

  1. Listing: {base_url}/devices/queries/devices/v1

  2. Details: {base_url}/devices/entities/devices/v2

Check the {base_url} in the config parameters details for further information.
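The two endpoints above follow the common list-then-detail pattern: the listing endpoint returns resource IDs and the details endpoint resolves them. A minimal sketch of that pattern, assuming CrowdStrike's usual {"resources": [...]} response shape (the fetch callable and function names are hypothetical, not the collector's actual implementation):

```python
def build_endpoints(base_url: str) -> dict:
    """Build the hosts endpoints from the configured base URL."""
    return {
        "listing": f"{base_url}/devices/queries/devices/v1",
        "details": f"{base_url}/devices/entities/devices/v2",
    }


def pull_hosts(base_url: str, fetch) -> list:
    """List host IDs, then resolve their details.

    `fetch(url, ids=None)` stands in for an authenticated HTTP call and
    is assumed to return a dict with a "resources" list, following the
    usual CrowdStrike API response shape.
    """
    endpoints = build_endpoints(base_url)
    host_ids = fetch(endpoints["listing"])["resources"]
    if not host_ids:
        return []
    return fetch(endpoints["details"], ids=host_ids)["resources"]
```

The same pattern applies to the other listing/details service pairs described below.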

Incidents

-

incidents

edr.crowdstrike.falconstreaming.incidents

Description

Incidents are events that occur in an organization which can represent a cybersecurity threat or an attack.

End point

  1. Listing: {base_url}/incidents/queries/incidents/v1

  2. Details: {base_url}/incidents/entities/incidents/GET/v1

Check the {base_url} in the config parameters details for further information.

Spotlight
Vulnerabilities

-

vulnerabilities

  • table: edr.crowdstrike.falconstreaming.vulnerabilities

  • alias: edr.crowdstrike.falcon_spotlight.vulnerabilities

Description

Vulnerabilities are known security risks in an operating system, application, hardware, firmware, or other part of a computing stack.

End point

  1. Listing: {base_url}/spotlight/queries/vulnerabilities/v1

  2. Details: {base_url}/spotlight/entities/vulnerabilities/v2

Check the {base_url} in the config parameters details for further information.

Behaviors

-

behaviors

edr.crowdstrike.falconstreaming.behaviors

Description

Behaviors are patterns of data transmissions in a network that are out of the norm, used to detect anomalies before cyber attacks occur.

End point

  1. Listing: {base_url}/incidents/queries/behaviors/v1

  2. Details: {base_url}/incidents/entities/behaviors/GET/v1

Check the {base_url} in the config parameters details for further information.

File Vantage

-

filevantage

edr.crowdstrike.falcon_filevantage.change

Description

Collect data about changes to files, folders, and registries with Falcon FileVantage APIs. Store this data to help you meet certain compliance recommendations and requirements as listed in the Sarbanes-Oxley Act, National Institute for Standards and Technology (NIST), Health Insurance Portability and Accountability Act (HIPAA), and others.

End point

  1. Listing: {base_url}/filevantage/queries/changes/v2

  2. Details: {base_url}/filevantage/entities/changes/v2

Check the {base_url} in the config parameters details for further information.

For more information on how the events are parsed, visit our page.

Available from v1.3.0

Data Source

Subtype

Service

Table

Event Stream (eStream)

AuthActivityAuditEvent

estream

edr.crowdstrike.falconstreaming.auth_activity

IncidentSummaryEvent

estream

edr.crowdstrike.falconstreaming.incident_summary

RemoteResponseSessionStartEvent RemoteResponseSessionEndEvent

estream

edr.crowdstrike.falconstreaming.remote_response_session

CustomerIOCEvent

estream

edr.crowdstrike.falconstreaming.customer_ioc

Event_ExternalAPIEvent

estream

edr.crowdstrike.falconstreaming.external_api

DetectionSummaryEvent (deprecated by CrowdStrike)

estream

edr.crowdstrike.falconstreaming.detection_summary

Deprecated: use the EPP detection summary instead. See v1.11.0.

UserActivityAuditEvent

estream

Depending on the event's event.ServiceName property (in lowercase):

  • groups → edr.crowdstrike.falconstreaming.user_activity_groups

  • devices → edr.crowdstrike.falconstreaming.user_activity_devices

  • detections → edr.crowdstrike.falconstreaming.user_activity_detections

  • quarantined_files → edr.crowdstrike.falconstreaming.user_activity_quarantined_files

  • ip_whitelist → edr.crowdstrike.falconstreaming.user_activity_ip_whitelist

  • prevention_policy → edr.crowdstrike.falconstreaming.user_activity_prevention_policy

  • sensor_update_policy → edr.crowdstrike.falconstreaming.user_activity_sensor_update_policy

  • device_control_policy → edr.crowdstrike.falconstreaming.user_activity_device_control_policy
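The routing above can be sketched as a lookup on the lowercased event.ServiceName property (a hypothetical helper for illustration, not the collector's actual code):

```python
# Mapping from event.ServiceName (lowercased) to the destination Devo table.
USER_ACTIVITY_TABLES = {
    "groups": "edr.crowdstrike.falconstreaming.user_activity_groups",
    "devices": "edr.crowdstrike.falconstreaming.user_activity_devices",
    "detections": "edr.crowdstrike.falconstreaming.user_activity_detections",
    "quarantined_files": "edr.crowdstrike.falconstreaming.user_activity_quarantined_files",
    "ip_whitelist": "edr.crowdstrike.falconstreaming.user_activity_ip_whitelist",
    "prevention_policy": "edr.crowdstrike.falconstreaming.user_activity_prevention_policy",
    "sensor_update_policy": "edr.crowdstrike.falconstreaming.user_activity_sensor_update_policy",
    "device_control_policy": "edr.crowdstrike.falconstreaming.user_activity_device_control_policy",
}


def route_user_activity(event):
    """Return the destination table for a UserActivityAuditEvent, or None
    when the lowercased ServiceName has no known mapping."""
    service_name = str(event.get("event", {}).get("ServiceName", "")).lower()
    return USER_ACTIVITY_TABLES.get(service_name)
```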

Description

The Streaming API provides several types of events.

End point

The endpoints are dynamically generated by following this (simplified) approach:

  1. Once an authentication token has been obtained, a request to {base_url}/sensors/entities/datafeed/v2 is performed to obtain the "Data Feeds".

    1. Check the {base_url} in the config parameters details for further information.

  2. Each Data Feed will contain a URL and a session token. A request to each of these URLs (along with their corresponding token) will return a streaming response in which every non-empty line represents a different event.

    1. Every Data Feed also contains a "refresh stream" URL, which the collector requests at intervals of less than 30 minutes to keep the stream session alive.

    2. All the Data Feeds are processed in parallel. The number of available Data Feeds depends on the CrowdStrike account's configuration.
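Step 2 can be illustrated with a small parser for the streaming response body, where each non-empty line is one JSON event. This is a simplified sketch of the idea, not the collector's implementation:

```python
import json


def iter_stream_events(lines):
    """Yield one parsed event per non-empty line of a Data Feed stream.

    `lines` is any iterable of text lines (for example, an HTTP streaming
    response iterated line by line). Empty lines carry no event and are
    skipped.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue  # blank keep-alive lines are not events
        yield json.loads(line)
```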

For more information on how the events are parsed, visit our page.

Available from v1.10.0

Data Source

Subtype

Service

Table

Alerts

-

alerts

edr.crowdstrike.falconstreaming.alert

Description

Alerts are events that occur in an organization which can represent a cybersecurity threat or an attack.

End point

  1. Listing: {base_url}/alerts/queries/alerts/v2

  2. Details: {base_url}/alerts/entities/alerts/GET/v2
    Check the {base_url} in the config parameters details for further information.

For more information on how the events are parsed, visit our page.

Available from v1.11.0

Data Source

Subtype

Service

Table

Event Stream (eStream)

EPPDetectionSummaryEvent

estream

edr.crowdstrike.falconstreaming.epp_detection_summary

Description

Alerts are events that occur in an organization which can represent a cybersecurity threat or an attack.

End point

The endpoints are dynamically generated by following this (simplified) approach:

  1. Once an authentication token has been obtained, a request to {base_url}/sensors/entities/datafeed/v2 is performed to obtain the "Data Feeds".

    1. Check the {base_url} in the config parameters details for further information.

  2. Each Data Feed will contain a URL and a session token. A request to each of these URLs (along with their corresponding token) will return a streaming response in which every non-empty line represents a different event.

    1. Every Data Feed also contains a "refresh stream" URL, which the collector requests at intervals of less than 30 minutes to keep the stream session alive.

    2. All the Data Feeds are processed in parallel. The number of available Data Feeds depends on the CrowdStrike account's configuration.

For more information on how the events are parsed, visit our page.

Available from v1.12.0

Data Source

Subtype

Service

Table

Indicators

-

indicators

edr.crowdstrike.falconstreaming.indicators

Description

The Indicators endpoints allow you to query for various types of indicators: indicators related to various adversaries, indicators of a specific confidence level, indicators associated with reports, and so on.

End point

  1. Listing: {base_url}/intel/queries/indicators/v1

  2. Details: {base_url}/intel/entities/indicators/GET/v1

Check the {base_url} in the config parameters details for further information.

Accepted Authentication Methods

Authentication method

Details

user/pass

You will need your client_id_value, which acts as a user, and secret_key_value, which acts as a password, to connect to the API and execute the API request.

Info

Treat Your Secret Key Like A Password

The security of your application is tied to the security of your secret key. Secure it as you would any sensitive credential. Don't share it with unauthorized individuals or email it to anyone under any circumstances.

Vendor setup

In order to configure the Devo | CrowdStrike API Resources collector, you need to create an API client that will be used to authenticate API requests.

...

  1. Finally, copy the Client ID and Client Secret shown on the next screen. You will need these values to configure the collector.

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Cloud collector

We use a piece of software called Collector Server to host and manage all our available collectors.

To enable the collector for a customer:

  1. In the Collector Server GUI, access the domain in which you want this instance to be created.

  2. Click Add Collector and find the one you wish to add.

  3. In the Version field, select the latest value.

  4. In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).

  5. In the sending method, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.

  6. In the Parameters section, establish the Collector Parameters as follows below:

Editing the JSON configuration

Code Block
{
  "global_overrides": {
    "debug": false
  },
  "inputs": {
    "crowdstrike": {
      "id": "<short_unique_id>",
      "enabled": true,
      "override_base_url": "<override_base_url_value>",
      "credentials": {
        "client_id": "<client_id_value>",
        "secret_key": "<secret_key_value>"
      },
      "services": {
        "incidents": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "hosts": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "vulnerabilities": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "behaviors": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "filevantage": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "alerts": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "indicators": {
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "estream": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "reset_persistence_auth": "<reset_persistence_auth_value>",
          "override_offset_save_batch_size_in_events": "<override_offset_save_batch_size_in_events_value>",
          "override_max_seconds_after_last_ingestion": "<override_max_seconds_after_last_ingestion_value>",
          "initial_partition_offsets": {
            "<partition_id_value>": "<partition_offset_value>"
          },
          "tagging_version": "<tagging_version_value>",
          "additional_tag_mappings": {
            "<lowercased_event_type_value>": "<fourth_tag_level_value>"
          }
        }
      }
    }
  }
}

Replace the placeholders with the required values:

Parameter

Data Type

Requirement

Value Range / Format

Description

short_unique_id

int

Mandatory

Min length: 1

Use this parameter to assign a unique ID to this input service.

override_base_url_value

str

Optional

Min length: 1

By default, the base URL is https://api.crowdstrike.com. This parameter allows you to customize the base URL.

Info

This parameter should be removed if it is not used.

client_id_value

str

Mandatory

Min length: 1

User Client ID to authenticate to the service.

secret_key_value

str

Mandatory

Min length: 1

User Secret Key to authenticate to the service.

request_period_in_seconds_value

int

Optional

Must be > 0

By default, this service runs every 600 seconds. This parameter allows you to customize this behavior.

Info

This parameter should be removed if it is not used.

start_timestamp_in_epoch_seconds_value

int

Mandatory

Format: Unix timestamps
Minimum value: 1609455600
Maximum value: Now()

Initial time period used when fetching data from the endpoint.

Info

Updating this value will produce the loss of all persisted data and current pipelines.
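For reference, a start timestamp can be derived from a UTC date with plain standard-library Python (the date below is just an example):

```python
from datetime import datetime, timezone

# Example: start ingesting from 2023-06-01 00:00:00 UTC.
start = datetime(2023, 6, 1, tzinfo=timezone.utc)
start_timestamp_in_epoch_seconds = int(start.timestamp())
print(start_timestamp_in_epoch_seconds)  # 1685577600
```

The resulting value must be at least 1609455600 (the documented minimum) and no greater than the current time.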

reset_persistence_auth_value

str

Optional

Format: YYYY-MM-DDTHH:mm:ss.SSSZ

Maximum value: current date

This parameter allows you to clear the persistence of the collector and restart the download pipeline. Updating this value will produce the loss of all persisted data and current pipelines.

Info

This parameter should be removed if it is not used.

override_offset_save_batch_size_in_events_value

int

Optional

Minimum value: 1
Maximum value: 1000

Although the stream service uses a streaming API (events are fetched continuously, one by one), the collected events are sent in batches for better performance. This parameter controls the number of items sent per batch. The default value is 10.

Info

This parameter should be removed if it is not used.
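The batching behavior described above can be sketched like this (a hypothetical helper for illustration; the collector's internals differ):

```python
def batch_events(events, batch_size=10):
    """Group a stream of events into lists of at most `batch_size` items.

    `batch_size` plays the role of override_offset_save_batch_size_in_events
    (default 10): the stream is consumed one event at a time, but events
    are delivered, and the offset persisted, per batch.
    """
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the trailing partial batch
```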

override_max_seconds_after_last_ingestion_value

int

Optional

Minimum value: 1
Maximum value: 1000

If the collector did not ingest a batch of events in the last n seconds, the connection will be closed and all the streams will be restarted. This parameter configures this time span.

Info

This parameter should be removed if it is not used.

partition_offset_value

object

Optional

It has the following structure:

"initial_partition_offsets": {"<partition_id_value>": "<partition_offset_value>"}

Where:

  • <partition_id_value>: The partition ID (0, 1, 2…) that will use this initial offset.

  • <partition_offset_value>: The initial offset. This offset will not be included in the ingestion (it will start from the next offset).

The CrowdStrike Events Stream has partitions, and each one streams its events, hence managing its event offset. You can specify an initial offset to start receiving events from when querying for events. This parameter allows you to define initial offsets for the initial run of this service or when the state is being reset.

Info

This parameter should be removed if it is not used.
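Because the configured offset itself is excluded, ingestion effectively resumes at the next offset for each partition. A sketch of that rule (hypothetical helper with illustrative field names, not the collector's code):

```python
def should_ingest(event, initial_offsets):
    """Return True if an event is past its partition's initial offset.

    `initial_offsets` mirrors initial_partition_offsets: the configured
    offset itself is NOT ingested; ingestion starts at the next one.
    Partitions without a configured offset are ingested from the start.
    """
    partition = str(event["partition"])
    initial = initial_offsets.get(partition)
    return initial is None or event["offset"] > int(initial)
```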

tagging_version_value

str

Optional

A version string (like "1.3.0") or "latest".

This parameter configures the tagging mechanism that every release might introduce.

  1. If you want to keep the original tagging mechanism, remove this parameter.

  2. If you want to use a specific mechanism created for a certain release, set your desired version.

  3. If you want to always have the latest tagging mechanism without having backwards compatibility, use latest.

Info

This parameter should be removed if it is not used.

additional_tag_mappings_value

object

Optional

It has the following structure:

"additional_tag_mappings": {"<lowercased_event_type_value>": "<fourth_tag_level_value>"}

Where:

  • <lowercased_event_type_value>: Every event's metadata.eventType (lowercased) JSON property.

  • <fourth_tag_level_value>: The fourth level for the edr.crowdstrike.falconstreaming.{value} tag.

In case you want to have a custom destination tag for certain events that is not covered by default, you can set it up using this parameter.

Info

This parameter should be removed if it is not used.
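The resulting destination tag can be sketched as follows (a hypothetical helper; the fallback of using the lowercased event type as the fourth level is an assumption for illustration, since the collector's built-in defaults depend on the tagging version):

```python
def destination_tag(event_type, additional_tag_mappings=None):
    """Build an edr.crowdstrike.falconstreaming.{value} tag for an event.

    `additional_tag_mappings` maps the lowercased metadata.eventType to a
    custom fourth tag level. When no mapping matches, this sketch falls
    back to the lowercased event type itself.
    """
    key = event_type.lower()
    mappings = additional_tag_mappings or {}
    fourth_level = mappings.get(key, key)
    return f"edr.crowdstrike.falconstreaming.{fourth_level}"
```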

On-premise collector

This data collector runs as a Docker container, so it can be deployed on any machine with the Docker service available. The following sections explain how to prepare the required setup to run it.

Structure

Create the following directory structure to run this collector:

Code Block
<any_directory>
└── devo-collectors/
    └── devo-collector-crowdstrikeapi/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        └── config/ 
            └── config-crowdstrikeapi.yaml

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in devo-collectors/devo-collector-crowdstrikeapi/certs/. Learn more about security credentials in Devo here.

Editing the config.yaml file

Code Block
globals:
  debug: <debug_value>
  id: not_used
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state

outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  crowdstrike:
    id: <input_id>
    enabled: true
    override_base_url: <override_base_url_value>
    credentials:
      client_id: <client_id_value>
      secret_key: <secret_key_value>
    services:
      incidents:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      hosts:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      vulnerabilities:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      behaviors:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      filevantage:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      alerts:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      indicators:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      estream:
        request_period_in_seconds: <request_period_in_seconds_value>
        reset_persistence_auth: <reset_persistence_auth_value>
        override_offset_save_batch_size_in_events: <override_offset_save_batch_size_in_events_value>
        override_max_seconds_after_last_ingestion: <override_max_seconds_after_last_ingestion_value>
        initial_partition_offsets:
          <partition_id_value>: <partition_offset_value>
        tagging_version: <tagging_version_value>
        additional_tag_mappings:
          <lowercased_event_type_value>: <fourth_tag_level_value>

Replace the placeholders with the required values:

Parameter

Data Type

Requirement

Value Range / Format

Description

short_unique_id

int

Mandatory

Min length: 1

Use this parameter to assign a unique ID to this input service.

override_base_url_value

str

Optional

Min length: 1

By default, the base URL is https://api.crowdstrike.com. This parameter allows you to customize the base URL.

Info

This parameter should be removed if it is not used.

client_id_value

str

Mandatory

Min length: 1

User Client ID to authenticate to the service.

secret_key_value

str

Mandatory

Min length: 1

User Secret Key to authenticate to the service.

request_period_in_seconds_value

int

Optional

Must be > 0

By default, this service runs every 600 seconds. This parameter allows you to customize this behavior.

Info

This parameter should be removed if it is not used.

start_timestamp_in_epoch_seconds_value

int

Mandatory

Format: Unix timestamps
Minimum value: 1609455600
Maximum value: Now()

Initial time period used when fetching data from the endpoint.

Info

Updating this value will produce the loss of all persisted data and current pipelines.

reset_persistence_auth_value

str

Optional

Format: YYYY-MM-DDTHH:mm:ss.SSSZ

Maximum value: current date

This parameter allows you to clear the persistence of the collector and restart the download pipeline. Updating this value will produce the loss of all persisted data and current pipelines.

Info

This parameter should be removed if it is not used.

override_offset_save_batch_size_in_events_value

int

Optional

Minimum value: 1
Maximum value: 1000

Although the stream service uses a streaming API (events are fetched continuously, one by one), the collected events are sent in batches for better performance. This parameter controls the number of items sent per batch. The default value is 10.

Info

This parameter should be removed if it is not used.

override_max_seconds_after_last_ingestion_value

int

Optional

Minimum value: 1
Maximum value: 1000

If the collector did not ingest a batch of events in the last n seconds, the connection will be closed and all the streams will be restarted. This parameter configures this time span.

Info

This parameter should be removed if it is not used.

partition_offset_value

object

Optional

It has the following structure:

initial_partition_offsets:
  <partition_id_value>: <partition_offset_value>

Where:

  • <partition_id_value>: The partition ID (0, 1, 2…) that will use this initial offset.

  • <partition_offset_value>: The initial offset. This offset will not be included in the ingestion (it will start from the next offset).

The CrowdStrike Events Stream has partitions, and each one streams its events, hence managing its event offset. You can specify an initial offset to start receiving events from when querying for events. This parameter allows you to define initial offsets for the initial run of this service or when the state is being reset.

Info

This parameter should be removed if it is not used.

tagging_version_value

str

Optional

A version string (like "1.3.0") or "latest".

This parameter configures the tagging mechanism that every release might introduce.

  1. If you want to keep the original tagging mechanism, remove this parameter.

  2. If you want to use a specific mechanism created for a certain release, set your desired version.

  3. If you want to always have the latest tagging mechanism without having backwards compatibility, use latest.

Info

This parameter should be removed if it is not used.

additional_tag_mappings_value

object

Optional

It has the following structure:

additional_tag_mappings:
  <lowercased_event_type_value>: <fourth_tag_level_value>

Where:

  • <lowercased_event_type_value>: Every event's metadata.eventType (lowercased) JSON property.

  • <fourth_tag_level_value>: The fourth level for the edr.crowdstrike.falconstreaming.{value} tag.

In case you want to have a custom destination tag for certain events that is not covered by default, you can set it up using this parameter.

Info

This parameter should be removed if it is not used.

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

Collector Docker image

SHA-256 hash

collector-crowdstrike_api_resources_if-docker-image-1.12.0

99ad930ac2377f54d333965b9f49f767b2e6107b27113e2dc8f387fc7c9f6095

Use the following command to add the Docker image to the system:

Code Block
gunzip -c <image_file>-<version>.tgz | docker load
Note

Once the Docker image is imported, Docker will show the real name of the image (including version info). Replace <product_name>, <image_name> and <version> with the proper values.

The Docker image can be deployed on the following services:

Docker

Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/

Code Block
docker run \
--name collector-<product_name> \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config.yaml \
--rm \
--interactive \
--tty \
<image_name>:<version>
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.

Code Block
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}

To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note

Replace <product_name>, <image_name> and <version> with the proper values.


Collector services detail

This section is intended to explain how to proceed with specific actions for services.

...

Expand
titleRestart the persistence

This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:

  1. Edit the configuration file.

  2. Change the value of the reset_persistence_auth_value to a different one.

  3. Save the changes.

  4. Restart the collector.

The collector will detect this change and will restart the persistence using the parameters of the configuration file.

Troubleshooting

Expand
titleTroubleshooting

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

Configuration errors

Error Type

Error Id

Error Message

Cause

Solution

InitVariablesError

1-2

Invalid content detected in the configuration

The module_properties setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

3-5

Invalid content detected in the configuration

The base_url setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

6-7

Invalid content detected in the configuration

The override_base_url setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

8-9

Invalid content detected in the configuration

The base_tag setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

10-11

Invalid content detected in the configuration

The user_agent setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

12-13

Invalid content detected in the configuration

The endpoint setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

14-15

Invalid content detected in the configuration

The auth setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

16-17

Invalid content detected in the configuration

The event_list setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

18-19

Invalid content detected in the configuration

The details settings need to have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

20-22

Invalid content detected in the configuration

The logs_limit_in_items setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

23-24

Invalid content detected in the configuration

The credentials setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

25-26

Invalid content detected in the configuration

The client_id setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

27-28

Invalid content detected in the configuration

The secret_key setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

29-31

Invalid content detected in the configuration

The start_timestamp_in_epoch_seconds setting does not have the right format.

Check the documentation and update the configuration accordingly

InitVariablesError

32-33

Invalid content detected in the configuration

The unique_identifier setting does not have the right format.

Check the documentation and update the configuration accordingly

SetupError

100

Required credentials are invalid

Required credentials are invalid

Include the proper credentials in the configuration

SetupError

101

Service not found

A declared service is not valid

Include the proper service name in the configuration

SetupError

102-103

The token has no access

The generated token cannot access a service list.

Enable the service in the Crowdstrike configuration

SetupError

104-105

The token has no access

The generated token cannot access service details.

Enable the service in the Crowdstrike configuration

Runtime errors

Error Type

Error Id

Error Message

Cause

Solution

PrePullError

200

Error before pulling data

The start time is newer than the current date

Update the configuration

PullError

300-312

Error pulling data

Error pulling data from the service

Review the error and act accordingly if required.

ApiError

400-403

API error

The API returned an error

Review the error and act accordingly if required.

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Expand
titleOperations to verify collector

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, and of validating the given configuration. A successful run shows the following output messages for the initializer module:

Code Block
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Loading configuration using the following files: {"full_config": "config-test-local.yaml", "job_config_loc": null, "collector_config_loc": null}
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Using the default location for "job_config_loc" file: "/etc/devo/job/job_config.json"
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> "\etc\devo\job" does not exists
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> Using the default location for "collector_config_loc" file: "/etc/devo/collector/collector_config.json"
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> "\etc\devo\collector" does not exists
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> Results of validation of config files parameters: {"config": "C:\git\collectors2\devo-collector-<name>\config\config.yaml", "config_validated": True, "job_config_loc": "/etc/devo/job/job_config.json", "job_config_loc_default": True, "job_config_loc_validated": False, "collector_config_loc": "/etc/devo/collector/collector_config.json", "collector_config_loc_default": True, "collector_config_loc_validated": False}
2023-01-10T15:22:57.171 WARNING MainProcess::MainThread -> [WARNING] Illegal global setting has been ignored -> multiprocessing: False

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method. A successful run has the following output messages for the initializer module:

Code Block
2023-01-10T15:23:00.788    INFO OutputProcess::MainThread -> DevoSender(standard_senders,devo_sender_0) -> Starting thread
2023-01-10T15:23:00.789    INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(standard_senders,devo_1) -> Starting thread (every 300 seconds)
2023-01-10T15:23:00.790    INFO OutputProcess::MainThread -> DevoSenderManager(standard_senders,manager,devo_1) -> Starting thread
2023-01-10T15:23:00.842    INFO OutputProcess::MainThread -> global_status: {"output_process": {"process_id": 18804, "process_status": "running", "thread_counter": 21, "thread_names": ["MainThread", "pydevd.Writer", "pydevd.Reader", "pydevd.CommandThread", "pydevd.CheckAliveThread", "DevoSender(standard_senders,devo_sender_0)", "DevoSenderManagerMonitor(standard_senders,devo_1)", "DevoSenderManager(standard_senders,manager,devo_1)", "OutputStandardConsumer(standard_senders_consumer_0)",

Sender services

The Integrations Factory Collector SDK has 3 different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following Sender Services:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service.

Sender manager internal queue size: 0

Displays the items available in the internal sender queue.

This value helps detect bottlenecks and whether the performance of data delivery to Devo needs to be increased, which can be achieved by increasing the number of concurrent senders.

Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.

    By default these traces will be shown every 10 minutes.
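
The figures in this trace relate directly: dividing the events sent since the last checkpoint by the elapsed time gives the per-cycle delivery throughput. A minimal sketch using the example values above (illustrative only, not collector code):

```python
# Values taken from the example sender trace above.
events_since_checkpoint = 21
elapsed_seconds = 0.007

# Per-cycle delivery throughput (events per second).
throughput = events_since_checkpoint / elapsed_seconds
print(f"{throughput:.0f} events/second")  # 3000 events/second
```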

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service

Sender manager internal queue size: 0

Displays the items available in the internal sender queue.

Standard - Total number of messages sent: 57, messages sent since "2023-01-10 16:09:16.116750+00:00": 0 (elapsed 0.000 seconds)

Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:

  • 57 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2023-01-10 16:09:16.116750+00:00.

  • 0 events were sent to Devo between the last UTC checkpoint and now.

  • Delivering those events required 0.000 seconds.

Expand
titleCheck memory usage

To check the memory usage of this collector, look for the following log records in the collector which are displayed every 5 minutes by default, always after running the memory-free process.

  • The used memory is displayed per running process; the sum of both values gives the total memory used by the collector.

  • The global pressure on the available memory is displayed in the global value.

  • All metrics (Global, RSS, VMS) show the value before and after freeing memory (before -> after).

Code Block
  INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB ->
  410.02MiB)
  INFO OutputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(28.41MiB -> 28.41MiB), VMS(705.28MiB ->
  705.28MiB)
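
These traces follow a fixed pattern, so they can be extracted programmatically when monitoring the collector. A hedged sketch, assuming only the log format shown in the sample above (the field names are illustrative, not the collector's internals):

```python
import re

# Regular expression for the "[GC]" memory trace format shown above:
# global percentage plus per-process RSS and VMS, each as before -> after.
GC_RE = re.compile(
    r"\[GC\] global: (?P<g_before>[\d.]+)% -> (?P<g_after>[\d.]+)%, "
    r"process: RSS\((?P<rss_before>[\d.]+)MiB -> (?P<rss_after>[\d.]+)MiB\), "
    r"VMS\((?P<vms_before>[\d.]+)MiB -> (?P<vms_after>[\d.]+)MiB\)"
)

line = ("INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, "
        "process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB -> 410.02MiB)")

m = GC_RE.search(line)
metrics = {k: float(v) for k, v in m.groupdict().items()}
print(metrics["rss_before"], "->", metrics["rss_after"])  # 34.5 -> 34.08
```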

Change log

Release

Released on

Release type

Recommendations

v1.12.0

Status
colourPurple
titleNew FEATUREs

Expand
titleDetails

Feature

  • Added indicators service.

Recommended version

v1.11.0

Status
colourPurple
titleNew FEATUREs

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Feature

  • Added EPP Detection Summary events as a default service.

Improvements

  • Updated DCSDK

v1.10.0

Status
colourPurple
titleNew FEATUREs

Upgrade

Expand
titleDetails

Feature

  • Added new service Alerts.

v1.9.1

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements

  • Solved CVE-2024-45490, CVE-2024-45491, CVE-2024-45492 by updating docker base image version to 1.3.1.

v1.9.0

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements

  • Updated DCSDK from 1.12.2 to 1.12.4

    • Changed internal queue management to protect against the OOM killer (OOMK)

    • Extracted the ModuleThread structure from PullerAbstract

    • Improved controlled stop when both processes fail to instantiate

    • Improved controlled stop when InputProcess is killed

    • Fixed an error related to a ValueError exception that was not properly handled

    • Fixed an error related to the loss of some values in internal messages

v1.8.0

Status
colourBlue
titleIMPROVEMENTS

Status
colourYellow
titleBUG FIXING

Upgrade

Expand
titleDetails

Improvements

  • Updated DCSDK from 1.11.1 to 1.12.2.

  • Updated the DCSDK base image to 1.3.0.

Bug fixing

  • Fixed duplicated logs in event services.

v1.7.0

Status
colourBlue
titleIMPROVEMENTS

Status
colourYellow
titleBUG FIXING

Upgrade

Expand
titleDetails

Improvements

  • Added compatibility when reading the configuration to accept older parameters.

Bug fixing

  • Fixed a bug when getting the eStream listing and improved the log message.

v1.6.0

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements

  • Updated to DCSDK 1.11.1

    • Added an extra check for invalid message timestamps

    • Added an extra check to improve the controlled stop

    • Changed the default number of connection retries (now 7)

    • Fix for Devo connection retries

    • Updated DevoSDK to v5.1.9

    • Fixed some bugs related to development on macOS

    • Added an extra validation and fix for when the DCSDK receives a wrong timestamp format

    • Added an optional config property to use the Syslog timestamp format in a strict way

    • Updated DevoSDK to v5.1.10

    • Fix for SyslogSender related to UTF-8

    • Enhanced troubleshooting: trace standardization; some traces have been introduced

    • Introduced a mechanism to detect "Out of Memory killer" situations

v1.4.3

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements:

  • New functionality, access to File Vantage API

  • Updated DCSDK from 1.8.0 to 1.10.2:

    • Upgraded internal dependencies

    • Stored lookup instances in DevoSender to avoid creating new instances for the same lookup

    • Ensured service_config is a dict in templates

    • Ensured special characters are properly sent to the platform

    • Changed the log level of some messages from info to debug

    • Changed some wrong log messages

    • Upgraded some internal dependencies

    • Changed the queue passed to the setup instance constructor

    • Added input metrics

    • Modified output metrics

    • Updated DevoSDK to version 5.1.6

    • Standardized exception messages for traceability

    • Added more detail in queue statistics

    • Updated PythonSDK to version 5.0.7

    • Introduced pyproject.toml

    • Added requirements.dev.txt

    • Fixed error in pyproject.toml related to project scripts endpoint

v1.4.2

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements:

  • Updated DCSDK from 1.7.2 to 1.8.0:

    • Ability to validate collector setup and exit without pulling any data.

    • Ability to store in the persistence the messages that couldn't be sent after the collector stopped.

    • Ability to send messages from the persistence when the collector starts and before the puller begins working.

    • Ensure special characters are properly sent to the platform.

v1.4.0

Status
colourBlue
titleIMPROVEMENTS

Status
colourYellow
titleBUG FIXING

Upgrade

Expand
titleDetails

Improvements:

  • Added @devo_pulling_id field.

  • Updated the `details` endpoint to use the v2 API (due to v1 deprecation).

Bug Fixing:

  • Fixed a bug that prevented overriding the base URL.

v1.3.1

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements:

  • The RegEx validation has been updated to enforce the HTTP[S] protocol for all services when this parameter is filled in by the user.

  • The Event Stream (eStream) service has been updated to use the same base_url overriding parameter as the other services. This allows the user to define it only once for all available services through the override_base_url parameter in the user config file.

v1.3.0

Status
colourBlue
titleIMPROVEMENTS

Status
colourPurple
titleNew FEATUREs

Upgrade

Expand
titleDetails

Improvements:

  • Upgraded underlay IFC SDK v1.3.0 to v1.4.0.

  • Updated the underlying DevoSDK package to v3.6.4 and its dependencies. This upgrade increases the resilience of the collector when the connection with Devo or the Syslog server is lost; the collector is able to reconnect in some scenarios without triggering the self-kill feature.

  • Support for stopping the collector when a GRACEFULL_SHUTDOWN system signal is received.

  • Re-enabled the logging to devo.collector.out for Input threads.

  • Improved self-kill functionality behavior.

  • Added more details in log traces.

  • Added log traces for knowing system memory usage.

New Features:

  • CrowdStrike Event Stream (eStream) data source is now available. This service leverages the CrowdStrike Falcon Event Streams API to obtain the customer’s DataFeed URLs and continuously fetch events that will be ingested under the edr.crowdstrike.falconstreaming.* family of tables. For more information, check CrowdStrike’s official documentation.

v1.2.0

Status
colourBlue
titleIMPROVEMENTS

Upgrade

Expand
titleDetails

Improvements:

  • Upgraded underlay IFC SDK v1.1.3 to v1.3.0.

  • Resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.

  • When an exception is raised by the Collector Setup, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum number of retries is applied.

  • When an exception is raised by the Collector Pull method, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum number of retries is applied.

  • When an exception is raised by the Collector pre-pull method, the collector retries after 30 seconds. No maximum retries are applied.
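
The backoff described above for the Setup and Pull methods can be sketched as follows. This is an illustrative reconstruction of the documented schedule (start at 5 seconds, multiply by 5 per consecutive failure, cap at 1800 seconds), not the collector's actual code:

```python
def backoff_schedule(retries, base=5, factor=5, cap=1800):
    """Return the wait time in seconds before each consecutive retry,
    following the documented policy: start at `base`, multiply by
    `factor` on each failure, never exceed `cap`."""
    waits = []
    wait = base
    for _ in range(retries):
        waits.append(min(wait, cap))
        wait *= factor
    return waits

print(backoff_schedule(6))  # [5, 25, 125, 625, 1800, 1800]
```

So after the fifth consecutive exception the collector waits the maximum 1800 seconds (30 minutes) between attempts, and keeps retrying indefinitely.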

v1.1.0

Status
colourBlue
titleIMPROVEMENTS

Status
colourYellow
titleVULNS

Upgrade

Expand
titleDetails

Improvements:

  • The underlay IFC SDK has been updated from v1.1.2 to v1.1.3.

  • Resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.

Vulnerabilities mitigation:

  • All critical and high vulnerabilities have been mitigated.

v1.0.0

Status
colourPurple
titleNew FEATUREs

-

Expand
titleDetails

New Features:

  • Initial release that includes the following data sources from CrowdStrike API:

    • Hosts

    • Incidents

    • Vulnerabilities

    • Behaviors

Version migration

Expand
titleUpgrading

Upgrade from

Compatible

Impact

Steps

>=1.4.0

Yes

No known impact

  • Pause the collector.

  • Update the image version.

  • Resume the collector.

1.3.1

Yes

No known impact

  • Pause the collector.

  • Update the image version.

  • Resume the collector.

...