
...

CrowdStrike is one of the top data sources for Devo customers and prospects alike, so we encourage new customers to use this collector and existing ones to transition to it soon.

Data source description

Data source

Subtype

Table

Service

Endpoint

Description

Available from release

Hosts

-

edr.crowdstrike.falconstreaming.agents

hosts

  1. Listing: {base_url}/devices/queries/devices/v1

  2. Details: {base_url}/devices/entities/devices/v2

Check the {base_url} in the config parameters details for further information.

Hosts are endpoints that run the Falcon sensor. You can get information and details about these agents.

Info

Reference documentation:

v1.0.0

Incidents

-

edr.crowdstrike.falconstreaming.incidents

incidents

  1. Listing: {base_url}/incidents/queries/incidents/v1

  2. Details: {base_url}/incidents/entities/incidents/GET/v1

Check the {base_url} in the config parameters details for further information.

Incidents are events that occur in an organization which can represent a cybersecurity threat or an attack.

Info

Reference documentation:

v1.0.0

Spotlight

Vulnerabilities

-

edr.crowdstrike.falconstreaming.vulnerabilities

alias:

edr.crowdstrike.falcon_spotlight.vulnerabilities

vulnerabilities

  1. Listing: {base_url}/spotlight/queries/vulnerabilities/v1

  2. Details: {base_url}/spotlight/entities/vulnerabilities/v2

Check the {base_url} in the config parameters details for further information.

Vulnerabilities are known security risks in an operating system, application, hardware, firmware, or other part of a computing stack.

Info

Reference documentation:

v1.0.0

Behaviors

-

edr.crowdstrike.falconstreaming.behaviors

 

behaviors

  1. Listing: {base_url}/incidents/queries/behaviors/v1

  2. Details: {base_url}/incidents/entities/behaviors/GET/v1

Check the {base_url} in the config parameters details for further information.

Behaviors are patterns of data transmissions in a network that are out of the norm, used to detect anomalies before cyber attacks occur.

Info

Reference documentation:

v1.0.0

File Vantage

 

edr.crowdstrike.falcon_filevantage.change

filevantage

  1. Listing: {base_url}/filevantage/queries/changes/v2

  2. Details: {base_url}/filevantage/entities/changes/v2

Check the {base_url} in the config parameters details for further information.

Collect data about changes to files, folders, and registries with the Falcon FileVantage APIs. Store this data to help you meet certain compliance recommendations and requirements as listed in the Sarbanes–Oxley Act, the National Institute of Standards and Technology (NIST) guidelines, the Health Insurance Portability and Accountability Act (HIPAA), and others.

Info

Reference documentation:

v1.4.3

Event Stream (eStream)

AuthActivityAuditEvent

edr.crowdstrike.falconstreaming.auth_activity

estream

The endpoints are dynamically generated by following this (simplified) approach (see the sketch after this table):

  1. Once an authentication token has been obtained, a request to {base_url}/sensors/entities/datafeed/v2 is performed to obtain the “Data Feeds”.

    1. Check the {base_url} in the config parameters details for further information.

  2. Each Data Feed contains a URL and a session token. A request to each of these URLs (along with its corresponding token) returns a streaming response in which every non-empty line represents a different event.

    1. Every Data Feed also contains a “refresh stream” URL, which is accessed at intervals of less than 30 minutes.

    2. All the Data Feeds are processed in parallel. The number of available Data Feeds depends on the CrowdStrike account’s configuration.

The Streaming API provides several types of events.

Info

Some of them are documented in https://developer.crowdstrike.com/crowdstrike/docs/streaming-api-events .

v1.3.0

IncidentSummaryEvent

edr.crowdstrike.falconstreaming.incident_summary

v1.3.0

RemoteResponseSessionStartEvent RemoteResponseSessionEndEvent

edr.crowdstrike.falconstreaming.remote_response_session

v1.3.0

CustomerIOCEvent

edr.crowdstrike.falconstreaming.customer_ioc

v1.3.0

Event_ExternalAPIEvent

edr.crowdstrike.falconstreaming.external_api

v1.3.0

DetectionSummaryEvent

edr.crowdstrike.falconstreaming.detection_summary

v1.3.0

UserActivityAuditEvent

Depending on the event’s event.ServiceName property (in lowercase):

  • groups: edr.crowdstrike.falconstreaming.user_activity_groups

  • devices: edr.crowdstrike.falconstreaming.user_activity_devices

  • detections: edr.crowdstrike.falconstreaming.user_activity_detections

  • quarantined_files: edr.crowdstrike.falconstreaming.user_activity_quarantined_files

  • ip_whitelist: edr.crowdstrike.falconstreaming.user_activity_ip_whitelist

  • prevention_policy: edr.crowdstrike.falconstreaming.user_activity_prevention_policy

  • sensor_update_policy: edr.crowdstrike.falconstreaming.user_activity_sensor_update_policy

  • device_control_policy: edr.crowdstrike.falconstreaming.user_activity_device_control_policy

v1.3.0
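To make the Event Stream flow described above more concrete, here is a minimal Python sketch of the discovery-and-stream loop. It is an illustration only, not the collector's implementation: the APP_ID value, the placeholder credentials, and the offset handling are assumptions, and the response field names (dataFeedURL, sessionToken) should be verified against CrowdStrike's Event Streams API documentation.

Code Block
# Minimal sketch of the Data Feed discovery and streaming flow described above.
# Illustrative only: APP_ID, the credentials and the offset handling are assumptions.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.crowdstrike.com"  # or your override_base_url
APP_ID = "devo-example"                   # hypothetical application id


def get_token(client_id: str, client_secret: str) -> str:
    """Request an OAuth2 bearer token from the Falcon API."""
    body = urllib.parse.urlencode(
        {"client_id": client_id, "client_secret": client_secret}
    ).encode()
    req = urllib.request.Request(f"{BASE_URL}/oauth2/token", data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def list_data_feeds(token: str) -> list:
    """Discover the available Data Feeds (URL + session token per feed)."""
    req = urllib.request.Request(
        f"{BASE_URL}/sensors/entities/datafeed/v2?appId={APP_ID}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("resources", [])


def stream_feed(feed: dict, offset: int = 0) -> None:
    """Read the streaming response line by line; each non-empty line is an event."""
    url = f"{feed['dataFeedURL']}&offset={offset}"
    req = urllib.request.Request(
        url,
        headers={"Authorization": f"Token {feed['sessionToken']['token']}"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw_line in resp:
            line = raw_line.strip()
            if line:
                event = json.loads(line)
                print(event.get("metadata", {}).get("eventType"))


if __name__ == "__main__":
    token = get_token("<client_id_value>", "<secret_key_value>")
    for feed in list_data_feeds(token):
        stream_feed(feed)  # the real collector processes all feeds in parallel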

Vendor setup

In order to configure the Devo | CrowdStrike API Resources collector, you need to create an API client that will be used to authenticate API requests.

  1. After getting your CrowdStrike Falcon Cloud credentials, log in to the CrowdStrike Falcon Cloud dashboard.

  2. Click the three dots in the left menu bar.

  3. Click API Clients and Keys. This will open a page to create an API client.

  4. Click Add API Client at the top right corner. Enter a CLIENT NAME and DESCRIPTION.

  5. Then, enable the API scopes for your new API client. Click the required Read permissions for each scope and click ADD to create the client.

  6. Finally, copy the Client ID and Client Secret shown on the next screen. You will need these values to configure the collector, and you can verify them with the request shown below.
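Optionally, you can verify the new credentials before configuring the collector by requesting an OAuth2 token directly from the Falcon API. The command below is only a sketch and assumes the default https://api.crowdstrike.com base URL; use the base URL of your CrowdStrike cloud if it differs.

Code Block
curl --request POST \
  --url https://api.crowdstrike.com/oauth2/token \
  --data "client_id=<client_id_value>" \
  --data "client_secret=<client_secret_value>"

A successful response returns a JSON document containing an access_token field.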

...


To enable the collector for a customer:

  1. In the Collector Server GUI, access the domain in which you want this instance to be created.

  2. Click Add Collector and find the one you wish to add.

  3. In the Version field, select the latest value.

  4. In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).

  5. In the Sending Method field, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.

  6. In the Parameters section, set the Collector Parameters as follows:

Editing the JSON configuration

Code Block
{
  "global_overrides": {
    "debug": <debug_value>
  },
  "inputs": {
    "crowdstrike": {
      "id": "<short_unique_id>",
      "enabled": true,
      "requests_per_second": <requests_per_second_value>,
      "override_base_url": "<override_base_url_value>",
      "credentials": {
        "client_id": "<client_id_value>",
        "secret_key": "<secret_key_value>"
      },
      "services": {
        "incidents": {
          "request_period_in_seconds": <request_period_in_seconds_value>,
          "start_timestamp_in_epoch_seconds": <start_timestamp_in_epoch_seconds_value>
        },
        "hosts": {
          "request_period_in_seconds": <request_period_in_seconds_value>,
          "start_timestamp_in_epoch_seconds": <start_timestamp_in_epoch_seconds_value>
        },
        "vulnerabilities": {
          "request_period_in_seconds": <request_period_in_seconds_value>,
          "start_timestamp_in_epoch_seconds": <start_timestamp_in_epoch_seconds_value>
        },
        "behaviors": {
          "request_period_in_seconds": <request_period_in_seconds_value>,
          "start_timestamp_in_epoch_seconds": <start_timestamp_in_epoch_seconds_value>
        },
        "filevantage": {
          "request_period_in_seconds": "<request_period_in_seconds_value>",
          "start_timestamp_in_epoch_seconds": "<start_timestamp_in_epoch_seconds_value>"
        },
        "estream": {
          "request_period_in_seconds": <request_period_in_seconds_value>,
          "reset_persistence_auth": "<reset_persistence_auth_value>",
          "overide_offset_save_batch_size_in_events": <overide_offset_save_batch_size_in_events_value>,
          "overide_max_seconds_after_last_ingestion": <overide_max_seconds_after_last_ingestion_value>,
          "initial_partition_offsets": {
            "<partition_id_value>": <partition_offset_value>
          },
          "tagging_version": "<tagging_version_value>",
          "additional_tag_mappings": {
            "<lowercased_event_type_value>": "<fourth_tag_level_value>"
          }
        }
      }
    }
  }
}

Replace the placeholders with the required values. The placeholders are the same as the ones described in the parameter table of the On-premise collector tab below.

Rw ui tabs macro
Rw tab
titleCloud collector

We use a piece of software called Collector Server to host and manage all our available collectors. If you want us to host this collector for you, get in touch with us and we will guide you through the configuration.


Rw tab
titleOn-premise collector

This data collector can be run on any machine that has the Docker service available, since it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.

Structure

Create the following directory structure before running this collector:

Code Block
<any_directory>
└── devo-collectors/
    └── devo-collector-crowdstrikeapi/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        └── config/ 
            └── config-crowdstrikeapi.yaml
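For example, assuming you start from <any_directory>, the structure above can be created with standard shell commands (the certificate and configuration files are added in the following steps):

Code Block
cd <any_directory>
mkdir -p devo-collectors/devo-collector-crowdstrikeapi/certs
mkdir -p devo-collectors/devo-collector-crowdstrikeapi/config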

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in devo-collectors/devo-collector-crowdstrikeapi/certs/. Learn more about security credentials in Devo here.


Editing the config-crowdstrikeapi.yaml file

Code Block
globals:
  debug: <debug_value>
  id: not_used
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
  multiprocessing: false
  queue_max_size_in_mb: 1024
  queue_max_size_in_messages: 1000
  queue_max_elapsed_time_in_sec: 60
  queue_wrap_max_size_in_messages: 100

outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  crowdstrike:
    id: <input_id>
    enabled: true
    requests_per_second: <requests_per_second_value>
    override_base_url: <override_base_url_value>
    credentials:
      client_id: <client_id_value>
      secret_key: <secret_key_value>
    services:
      incidents:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      hosts:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      vulnerabilities:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      behaviors:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      filevantage:
        request_period_in_seconds: <request_period_in_seconds_value>
        start_timestamp_in_epoch_seconds: <start_timestamp_in_epoch_seconds_value>
      estream:
        request_period_in_seconds: <request_period_in_seconds_value>
        reset_persistence_auth: <reset_persistence_auth_value>
        overide_offset_save_batch_size_in_events: <overide_offset_save_batch_size_in_events_value>
        overide_max_seconds_after_last_ingestion: <overide_max_seconds_after_last_ingestion_value>
        initial_partition_offsets:
          <partition_id_value>: <partition_offset_value>
        tagging_version: <tagging_version_value>
        additional_tag_mappings:
          <lowercased_event_type_value>: <fourth_tag_level_value>

Replace the placeholders with the required values:

Parameter

Data Type

Type

Value Range

Details

input_id

int

Mandatory

Minimum length: 1
Maximum length: 5

Use this parameter to give a unique ID to this input service.

input_status

bool

Mandatory

false / true

If the value is true, the input definition will be executed. If the value is false, the service will be ignored.

requests_per_second

int

Optional

Minimum value: 1

Customize the maximum number of API requests per second. If not used, the default setting will be used: 100000 requests/sec.

Info

This parameter should be removed if it is not used.

override_base_url

str

Optional

Valid URL following this regex:
pending

By default, the base url is https://api.crowdstrike.com. This parameter allows you to customize the base url.

Info

This parameter should be removed if it is not used.

creds_client

str

Mandatory

Any

User Client ID to authenticate to the service.

creds_secret

str

Mandatory

Any

User Secret Key to authenticate to the service.

period_in_seconds

int

Optional

Minimum value: 1

By default, this service will run every 600 seconds. This parameter allows you to customize this behavior.

Info

This parameter should be removed if it is not used.

start_timestamp_in_epoch_seconds

int

Mandatory

Format: Unix timestamps
Minimum value: 1609455600
Maximum value: Now()

Initial point in time from which data is fetched from the endpoint.

Note

Updating this value will produce the loss of all persisted data and current pipelines.

<reset_persistence_auth_value>

str

Optional

Format: YYYY-MM-DDTHH:mm:ss.SSSZ

Maximum value: current date

This parameter allows you to clear the persistence of the collector and restart the download pipeline. Updating this value will produce the loss of all persisted data and current pipelines.

 

Info

This parameter should be removed if it is not used.

<overide_offset_save_batch_size_in_events_value>

int

Optional

Minimum value: 1
Maximum value: 1000

Although the stream services use a streaming API (events are fetched continuously one by one), we send the collected events in batches for better performance. This parameter controls the number of items to be sent per batch. The default value is 10.

Info

This parameter should be removed if it is not used.

<overide_max_seconds_after_last_ingestion_value>

int

Optional

Minimum value: 1
Maximum value: 1000

If the collector has not ingested a batch of events in the last n seconds, the connection is closed and all the streams are restarted. This parameter configures this time span.

Info

This parameter should be removed if it is not used.

<initial_partition_offsets_value>

object

Optional

It has the following structure:

Code Block
languagenone
initial_partition_offsets:
       <partition_id_value>: <partition_offset_value>

Where:

  • <partition_id_value>: The partition ID (0, 1, 2…) that will use this initial offset.

  • <partition_offset_value>: The initial offset. This offset will not be included in the ingestion (it will start from the next offset).

The CrowdStrike Event Stream has partitions, each one streaming its own events and hence managing its own event offset. When querying for events, you can specify an initial offset to start receiving events from. This parameter allows you to define initial offsets for the initial run of this service or when the state is being reset.
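For instance, a hypothetical configuration that resumes partition 0 after offset 1500 and partition 1 after offset 300 (ingestion starts at the next offset of each partition) would look like this:

Code Block
initial_partition_offsets:
  0: 1500
  1: 300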

Info

This parameter should be removed if it is not used.

<tagging_version_value>

str

Optional

A version string (like "1.3.0") or "latest".

This parameter configures the tagging mechanism that every release might introduce.

  1. If you want to keep the original tagging mechanism, remove this parameter.

  2. If you want to use a specific mechanism created for a certain release, set your desired version.

  3. If you want to always have the latest tagging mechanism, without backwards compatibility guarantees, use latest.

Info

This parameter should be removed if it is not used.

<additional_tag_mappings_value>

object

Optional

It has the following structure:

Code Block
languagenone
additional_tag_mappings:
        <lowercased_event_type_value>: <fourth_tag_level_value>

Where:

  • <lowercased_event_type_value>: The event’s metadata.eventType JSON property, lowercased.

  • <fourth_tag_level_value>: The fourth level for the edr.crowdstrike.falconstreaming.{value} tag.

If you want a custom destination tag for certain events that is not covered by default, you can set it up using this parameter.

Info

This parameter should be removed if it is not used.
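As a reference, this is a hypothetical minimal inputs section that keeps only the mandatory parameters and enables only the hosts and incidents services. The id and timestamps are example values, not defaults, your own credentials go in the placeholders, and every optional parameter has been removed as recommended above:

Code Block
inputs:
  crowdstrike:
    id: 10001
    enabled: true
    credentials:
      client_id: <client_id_value>
      secret_key: <secret_key_value>
    services:
      hosts:
        start_timestamp_in_epoch_seconds: 1672531200
      incidents:
        start_timestamp_in_epoch_seconds: 1672531200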

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

Collector Docker image

SHA-256 hash

collector-crowdstrike_api_resources_if-docker-image-1.9.1

2e1d52a349bf579291556a27027421cc50a1bf399a707d4e42673ec3d6a9dcdc

Use the following command to add the Docker image to the system:

Code Block
gunzip -c collector-crowdstrike-docker-image-<version>.tgz | docker load

Once the Docker image is imported, the command output will show the real name of the Docker image (including version information). Replace <version> with the proper value.

The Docker image can be deployed on the following services:

  • Docker

  • Docker Compose

Docker

Execute the following command in the root directory <any_directory>/devo-collectors/devo-collector-crowdstrikeapi/:

Code Block
docker run \
--name collector-crowdstrikeapi \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config-crowdstrikeapi.yaml \
--rm -it docker.devo.internal/collector/crowdstrikeapi:<version>
Note

Replace <version> with the required value.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/devo-collector-crowdstrikeapi/ directory.

Code Block
version: '3'
services:
  collector-crowdstrikeapi:
    image: docker.devo.internal/collector/crowdstrikeapi:${IMAGE_VERSION:-latest}
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config-crowdstrikeapi.yaml}

To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/devo-collector-crowdstrikeapi/ directory:

Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note

Replace <version> with the required value.


API limitations

CrowdStrike does not apply API limitations as long as usage is reasonable.

Change log

Release

Released on

Release type

Details

Recommendations

v1.9.1

Status
colourGreen
titleIMPROVEMENTS

Improvements

  • Solved CVE-2024-45490, CVE-2024-45491, CVE-2024-45492 by updating docker base image version to 1.3.1.

Recommended version

v1.9.0

Status
colourGreen
titleIMPROVEMENTS

Improvements

  • Updated DCSDK from 1.12.2 to 1.12.4

    • Changed internal queue management to protect against the OOM killer

    • Extracted the ModuleThread structure from PullerAbstract

    • Improved the controlled stop when both processes fail to instantiate

    • Improved the controlled stop when the InputProcess is killed

    • Fixed an error related to a ValueError exception that was not properly handled

    • Fixed an error related to the loss of some values in internal messages

Update

v1.8.0

Status
colourGreen
titleIMPROVEMENTS

Status
colourRed
titleBUG FIXING

Improvements

  • Updated DCSDK from 1.11.1 to 1.12.2.

  • Updated the DCSDK base image to 1.3.0.

Bug fixing

  • Fixed duplicated logs in event services.

Update

v1.7.0

Status
colourGreen
titleIMPROVEMENTS

Status
colourRed
titleBUG FIXING

Improvements

  • Added compatibility when reading the configuration to accept older parameters.

Bug fixing

  • Fixed a bug when getting the eStream listing and improved the log message.

Update

v1.6.0

Status
colourGreen
titleIMPROVEMENTS

Improvements

  • Updated to DCSDK 1.11.1

    • Added an extra check for invalid message timestamps

    • Added an extra check to improve the controlled stop

    • Changed the default number of connection retries (now 7)

    • Fixed Devo connection retries

    • Updated DevoSDK to v5.1.9

    • Fixed a bug related to development on macOS

    • Added an extra validation and fix when the DCSDK receives a wrong timestamp format

    • Added an optional config property to use the Syslog timestamp format in a strict way

    • Updated DevoSDK to v5.1.10

    • Fixed SyslogSender handling of UTF-8

    • Enhanced troubleshooting: standardized traces and introduced some new traces

    • Introduced a mechanism to detect "Out of Memory killer" situations

Update

v1.4.3

Status
colourGreen
titleIMPROVEMENTS

Improvements:

  • New functionality, access to File Vantage API

  • Updated DCSDK from 1.8.0 to 1.10.2:

    • Upgraded internal dependencies

    • Stored lookup instances in DevoSender to avoid creating new instances for the same lookup

    • Ensured service_config is a dict in templates

    • Ensured special characters are properly sent to the platform

    • Changed the log level of some messages from info to debug

    • Fixed some incorrect log messages

    • Upgraded some internal dependencies

    • Changed the queue passed to the setup instance constructor

    • Added input metrics

    • Modified output metrics

    • Updated DevoSDK to version 5.1.6

    • Standardized exception messages for traceability

    • Added more detail in queue statistics

    • Updated PythonSDK to version 5.0.7

    • Introduced pyproject.toml

    • Added requirements.dev.txt

    • Fixed error in pyproject.toml related to project scripts endpoint

Recommended Version

Update

v1.4.2

Status
colourGreen
titleIMPROVEMENTS

Improvements:

  • Updated DCSDK from 1.7.2 to 1.8.0:

    • Ability to validate collector setup and exit without pulling any data.

    • Ability to store in the persistence the messages that couldn't be sent after the collector stopped.

    • Ability to send messages from the persistence when the collector starts and before the puller begins working.

    • Ensure special characters are properly sent to the platform.

Recommended Version

Update

v1.4.0

Status
colourGreen
titleIMPROVEMENTS

Status
colourRed
titleBUG FIXING

Improvements:

  • Added @devo_pulling_id field.

  • Updated the `details` endpoint to use the v2 API (due to the v1 deprecation).

Bug Fixing:

  • Fixed a bug that prevented overriding the base URL.

Recommended Version

Update

v1.3.1

Status
colourGreen
titleIMPROVEMENTS

Improvements:

  • The RegEx validation has been updated to enforce the HTTP[S] protocol for all services when this parameter is filled in by the user.

  • The Event Stream (eStream) service has been updated to use the same base_url overriding parameter as the other services. This allows the user to define it only once for all available services through the override_base_url parameter in the user config file.

Recommended Version

Update

v1.3.0

Status
colourGreen
titleIMPROVEMENTS
Status
colourGreen
titleFEATURE

Improvements:

  • Upgraded underlay IFC SDK v1.3.0 to v1.4.0.

  • Updated the underlying DevoSDK package to v3.6.4 and its dependencies. This upgrade increases the resilience of the collector when the connection to Devo or the Syslog server is lost: the collector is able to reconnect in some scenarios without running the self-kill feature.

  • Support for stopping the collector when a GRACEFULL_SHUTDOWN system signal is received.

  • Re-enabled the logging to devo.collector.out for Input threads.

  • Improved self-kill functionality behavior.

  • Added more details in log traces.

  • Added log traces for knowing system memory usage.

New Features:

  • CrowdStrike Event Stream (eStream) data source is now available. This service leverages the CrowdStrike Falcon Event Streams API to obtain the customer’s DataFeed URLs and continuously fetch events that will be ingested under the edr.crowdstrike.falconstreaming.* family of tables. For more information, check CrowdStrike’s official documentation.

Upgrade

v1.2.0

Status
colourGreen
titleIMPROVEMENTS

Status
colourYellow
titleVULNS

Improvements:

  • Upgraded underlay IFC SDK v1.1.3 to v1.3.0.

  • The resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.

  • When an exception is raised by the Collector Setup, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum number of retries is applied.

  • When an exception is raised by the Collector Pull method, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum number of retries is applied.

  • When an exception is raised by the Collector pre-pull method, the collector retries after 30 seconds. No maximum retries are applied.

Upgrade

v1.1.0

Status
colourGreen
titleIMPROVEMENTS
Status
colourYellow
titleVULNS

Improvements:

  • The underlay IFC SDK has been updated from v1.1.2 to v1.1.3.

  • The resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.

Vulnerabilities mitigation:

  • All critical and high vulnerabilities have been mitigated.

Upgrade

v1.0.0

Status
colourGreen
titleFEATURE

New Features:

  • Initial release that includes the following data sources from CrowdStrike API:

    • Hosts

    • Incidents

    • Vulnerabilities

    • Behaviors

Upgrade

-