Overview

VMware Carbon Black Cloud is a cloud-native endpoint security platform designed to detect malicious behavior and help prevent malicious files from attacking an organization. Its Event Forwarder sends data about alerts and events to an AWS S3 bucket, from which this collector ingests it into Devo.

Devo collector features

Feature

Details

Allow parallel downloading (multipod)

  • Allowed

Running environments

  • Collector server

  • On-premise

Populated Devo events

  • Table

Flattening preprocessing

  • No

Data sources

Data source

Description

API endpoint

Collector service name

Devo table

Available from release

Event Forwarder

The Carbon Black Cloud Forwarder lets you send data about alerts and events to an AWS S3 bucket, where it can be consumed by other applications in your security stack.

Data Forwarder Configuration API - Carbon Black Developer Network

AWS S3 bucket

event_forwarder

endpoint.vmware.cbc_event_forwarder

v1.0.0

endpoint.vmware.cbc_event_forwarder.cb_analytics

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_apicall

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_crossproc

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_fileless_scriptload

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_filemod

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_moduleload

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_netconn

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_procstart

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_procend

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_regmod

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_scriptload

v1.0.0

endpoint.vmware.cbc_event_forwarder.unknown

v1.0.0

endpoint.vmware.cbc_event_forwarder.kognos_alerts

v1.0.0

endpoint.vmware.cbc_event_forwarder.kognos_events

v1.0.0

Flattening preprocessing

Data source

Collector service

Optional

Source

Service

  • No

Vendor setup

Follow these steps to set up this collector:

  1. Log in with your credentials to the Carbon Black console.

  2. Note your Org Key on the top-left of the console.

  3. Go to Settings → API Access.

  4. Select the Access Level tab.

  5. Click on Add Access Level on the top-right.

  6. Give it a unique name and a description.

  7. Scroll down the table and look for the Event forwarding category. Mark the appropriate columns and click Save.

  8. Select the API Keys tab.

  9. Click on Add API Key.

  10. Give it a unique name and the appropriate access levels. Select Custom so you can choose the Access Level you created before. Note - Choose a name to clearly distinguish the API from your other API Keys. You can also add Authorized IP addresses and a description to differentiate among other APIs.

  11. Click Save and your credentials will display.

  12. You can view your credentials by opening the Actions drop-down and selecting API Credentials.

  13. Create your forwarder using the Data Forwarder Configuration API linked above. A successful creation adds a healthcheck.json file to the events folder in your S3 bucket.

  14. Update your config.yaml with the appropriate values, including the AWS region and SQS queue_name.

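Step 13's forwarder creation can be sketched with a short script. This is a hedged illustration, not an official client: the host, payload fields, and type value follow the Data Forwarder Configuration API as commonly documented, and every angle-bracketed value is a placeholder you must replace.

```python
import json
import urllib.request

# Placeholders -- replace with your own values before running.
ORG_KEY = "<org_key>"
API_HOST = "https://defense.conferdeploy.net"  # your Carbon Black Cloud instance URL
API_TOKEN = "<api_secret_key>/<api_id>"        # X-Auth-Token format used by CBC APIs

# Forwarder definition: ship endpoint events to the S3 bucket the collector reads from.
payload = {
    "name": "devo-event-forwarder",
    "enabled": True,
    "s3_bucket_name": "<bucket_name>",
    "s3_prefix": "events",
    "type": "endpoint.event",
}

request = urllib.request.Request(
    url=f"{API_HOST}/api/hosted/v1/orgs/{ORG_KEY}/forwarders",
    data=json.dumps(payload).encode("utf-8"),
    headers={"X-Auth-Token": API_TOKEN, "Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually create the forwarder:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())
```

After a successful creation, check your S3 bucket for the healthcheck.json file mentioned above.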

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

...

Info

See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.

Accepted authentication methods

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Rw ui tabs macro
Rw tab
titleOn-premise collector

This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.

Structure

Create the following directory structure to run the collector:

Code Block
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/ 
            └── config.yaml 
Note

Replace <product_name> with the proper value.

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.

Note

Replace <product_name> with the proper value.

Editing the config.yaml file

Code Block
globals:
  debug: false
  id: not_used
  name: cbc_collector
  persistence:
    type: filesystem
    config:
      directory_name: state

outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  carbonblackcloud:
    id: <short_unique_id>
    enabled: true
    requests_per_second: <requests_per_second>
    credentials:
      org_key: <org_key_value>
      aws_accesskey: <aws_access_key_value>
      aws_secretkey: <aws_secret_key_value>
    services:
      event_forwarder:
        aws_region: <aws_region>
        bucket_name: <bucket_name>
        queue_name: <queue_name>
        override_devo_tag: <override_devo_tag_value>
        kognos_categorization: <kognos_categorization_value>
        request_period_in_seconds: <request_period_in_seconds_value>
        override_files_per_request: <override_files_per_request_value>
Info

All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.

Replace the placeholders with your required values following the description table below:

Parameter

Data type

Type

Value range / Format

Details

<devo_address>

str

Mandatory

collector-us.devo.io
collector-eu.devo.io

Use this param to identify the Devo Cloud where the events will be sent.

<chain_filename>

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the chain file downloaded from your Devo domain. Usually this file's name is chain.crt.

<cert_filename>

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the certificate file (.crt) downloaded from your Devo domain.

<key_filename>

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the private key file (.key) downloaded from your Devo domain.

<short_unique_id>

int

Mandatory

Minimum length: 1
Maximum length: 5

Use this param to give a unique id to this input service.

Note

This parameter is used to build the persistence address. Do not use the same value for multiple collectors, as it could cause a collision.

<requests_per_second>

int

Optional

Minimum value: 1

Customize the maximum number of API requests per second. If not set, the default of 100000 requests/sec is used.

Info

This parameter should be removed if it is not used.

<org_key>

str

Mandatory

Minimum length: 1

This parameter is the Carbon Black Cloud organization key.

Info

For more information, see Carbon Black Cloud: Where is the Org Key Found?

<aws_accesskey>

str

Mandatory

Minimum length: 1

The AWS access key.

Info

For more information, see Understanding and getting your AWS credentials - AWS General Reference

<aws_secretkey>

str

Mandatory

Minimum length: 1

The AWS secret key.

Info

For more information, see Understanding and getting your AWS credentials - AWS General Reference

<aws_region>

str

Mandatory

Minimum length: 1

This parameter must be a list of valid target region names to be used when collecting data. One processing thread will be created per region.

Info

For more information about available regions, see Regions, Availability Zones, and Local Zones - Amazon Relational Database Service

<bucket_name>

str

Mandatory

Minimum length: 1

The AWS s3 bucket name. Examples:

  • docexamplebucket1

  • log-delivery-march-2020

  • my-hosted-content

<queue_name>

str

Mandatory

Minimum length: 1

The AWS SQS queue name.

<override_devo_tag_value>

str

Optional

A Devo Tag. For more information see Devo Tags.

This parameter allows you to define a custom Devo tag. The default value is endpoint.vmware.cbc_event_forwarder.{mapping_type}.

Info

This parameter can be removed or commented.
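As a minimal sketch of how the destination tag is derived from this parameter (resolve_tag is a hypothetical helper name; the collector's internal implementation may differ):

```python
# Default template quoted from the parameter description above.
DEFAULT_TAG_TEMPLATE = "endpoint.vmware.cbc_event_forwarder.{mapping_type}"

def resolve_tag(mapping_type: str, override_devo_tag: str = "") -> str:
    """Return the configured override tag if present, otherwise the default tag."""
    if override_devo_tag:
        return override_devo_tag
    return DEFAULT_TAG_TEMPLATE.format(mapping_type=mapping_type)
```

For example, resolve_tag("endpoint_event_filemod") yields endpoint.vmware.cbc_event_forwarder.endpoint_event_filemod, while a configured override is used unchanged.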

<kognos_categorization_value>

bool

Optional

false / true

Set this parameter to true to use the same categorization that Kognos uses: messages are categorized into alerts and events, and messages without the type field are discarded. The default value is false.

Message destination tables with Kognos categorization:

  • endpoint.vmware.cbc_event_forwarder.kognos_alerts

  • endpoint.vmware.cbc_event_forwarder.kognos_events

Info

This parameter can be removed or commented.

<request_period_in_seconds_value>

int

Optional

Minimum value: 1

The period (in seconds) at which the service's collection is scheduled. The default value is 15.

Info

This parameter can be removed or commented.

<override_files_per_request_value>

int

Optional

Minimum value: 1

Maximum value: 10

This parameter indicates the number of files to take from the queue per request. The default value is 10.

Info

This parameter can be removed or commented.
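The kognos_categorization behavior described above can be sketched as follows. This is an illustration only: the rule that types containing "alert" map to the alerts tag is an assumption, not the collector's documented logic; what is documented is that messages without a type field are discarded.

```python
KOGNOS_ALERTS_TAG = "endpoint.vmware.cbc_event_forwarder.kognos_alerts"
KOGNOS_EVENTS_TAG = "endpoint.vmware.cbc_event_forwarder.kognos_events"

def kognos_categorize(messages):
    """Split messages into alerts and events; discard messages without a 'type' field."""
    alerts, events = [], []
    for msg in messages:
        msg_type = msg.get("type")
        if msg_type is None:
            continue  # documented behavior: no 'type' field -> discarded
        # Assumption for illustration only: 'alert' in the type marks an alert.
        if "alert" in str(msg_type).lower():
            alerts.append(msg)
        else:
            events.append(msg)
    return alerts, events
```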

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

Collector Docker image

SHA-256 hash

collector-vmware_carbonblackcloud_event_forwarder_if-docker-image-1.0.0

c029326af6f6302b19bac9110949c41bf68fda70df4686b3973d7e6b37e0646b

Use the following command to add the Docker image to the system:

Code Block
gunzip -c <image_file>-<version>.tgz | docker load
Note

Once the Docker image is imported, the real name of the Docker image (including version info) will be displayed. Replace <image_file> and <version> with the proper values.
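Before loading the image, you can check the downloaded .tgz against the SHA-256 hash listed above; a minimal sketch (the file name is a placeholder):

```python
import hashlib

# Hash quoted from the table above.
EXPECTED_SHA256 = "c029326af6f6302b19bac9110949c41bf68fda70df4686b3973d7e6b37e0646b"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (placeholder file name):
# assert sha256_of_file("<image_file>-<version>.tgz") == EXPECTED_SHA256
```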

The Docker image can be deployed on the following services:

Docker

Execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

Code Block
docker run \
--name collector-<product_name> \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config.yaml \
--rm \
--interactive \
--tty \
<image_name>:<version>
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.

Code Block
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}

To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Rw tab
titleCloud collector

We use a piece of software called Collector Server to host and manage all our available collectors. If you want us to host this collector for you, get in touch with us and we will guide you through the configuration.

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

Events service

Expand
titleVerify data collection

Once the collector has been launched, it is important to check that ingestion is working properly. To do so, go to the collector's logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::MainThread -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Starting thread
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) Starting the execution of setup()
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Setting up Event Forwarder puller, performing a test request to the API to check if the credentials provided are valid.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> The AWS servers have been reached with no issues. Proceeding to test access to the SQS and S3 services
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> S3 Bucket kognos-devo-7desj9gn-cb is configured to send ['s3:ObjectCreated:*'] to SQS queue: kognos-devo-cbq.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> There were no errors while accessing the Event forwarder service with the provided API access and secret key, proceeding to pull the data.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) Finalizing the execution of setup()
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Setup for module <CarbonBlackCloudEventForwarderPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Pull Started. Retrieving timestamp: 2022-09-26 08:01:02.286976+00:00
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Retrieving Queue with name: name
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Retrieving Messages form Queue with name: name
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Number of files detected through the queue: 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_filemod": 8
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_procstart": 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_crossproc": 2
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_moduleload": 35
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_procend": 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> (Partial) Statistics for this pull cycle (@devo_pulling_id=1664172062843) so far:  Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47; 
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1664172062843): Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47; Average of events per second: 93.91 Elapsed in seconds: 0.5

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

Code Block
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1664172062843): Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47; Average of events per second: 93.91 Elapsed in seconds: 0.5
Info

The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

...

Expand
titleTroubleshooting

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

Error type

Error ID

Error message

Cause

Solution

InitVariablesError

1

"module_properties" setting from "module_definition" has not been found in <collector_definitions> file. This setting is mandatory. Execution aborted

This error is raised when module_properties property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

2

"module_properties" setting from "module_definition" section in <collector_definitions> file should be a <dict> instance not <{type(module_properties)}>. Execution aborted.

This error is raised when module_properties is defined in collector_definitions.yaml but the format is not dict.

This is an internal issue. Contact the Devo support team.

3

"base_tag" setting has not been found as key of base_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when base_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

4

"base_tag" setting should be a str instance not base_tag. Execution aborted.

This error is raised when base_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo support team.

5

"files_per_request" setting has not been found as key of files_per_request. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when files_per_request property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

6

"files_per_request" setting should be a int instance not files_per_request. Execution aborted.

This error is raised when files_per_request is defined in collector_definitions.yaml but the format is not int.

This is an internal issue. Contact the Devo support team.

7

"files_per_request" cannot be less than 0. Change value of the "files_per_request" in the configuration to a number greater than or equal to 0

This error is raised when files_per_request is defined in collector_definitions.yaml but is less than 0.

This is an internal issue. Contact the Devo support team.

8

"kognos_categorization" setting has not been found as key of kognos_categorization. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_categorization property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

9

"kognos_categorization" setting should be a bool instance not kognos_categorization. Execution aborted.

This error is raised when kognos_categorization is defined in collector_definitions.yaml but the format is not bool.

This is an internal issue. Contact the Devo support team.

11

"kognos_alerts_tag" setting has not been found as key of kognos_alerts_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_alerts_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

12

"kognos_alerts_tag" setting should be a str instance not kognos_alerts_tag. Execution aborted.

This error is raised when kognos_alerts_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo support team.

14

"kognos_events_tag" setting has not been found as key of kognos_events_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_events_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo support team.

16

"kognos_events_tag" setting should be a str instance not kognos_events_tag. Execution aborted.

This error is raised when kognos_events_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo support team.

17

"credentials" setting from "input_config" has not been found in configuration file. This setting is mandatory. Execution aborted.

This error is raised when the required property credentials is not found in the configuration file.

Add a credentials dictionary to the configuration file, including the org_key, aws_accesskey, and aws_secretkey fields.

18

"credentials" setting from "input_config" section in configuration file should be a <dict> instance not <{type(credentials)}>. Execution aborted.

This error is raised when credentials is defined in the configuration file but the format is not dict.

Edit the value of credentials in the configuration file so it is of type dict.

19

"aws_accesskey" setting has not been found as key of aws_accesskey. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_accesskey is not found inside the credentials dictionary of the configuration file.

Add the aws_accesskey property to the credentials dictionary in the configuration file.

20

"aws_accesskey" setting should be a str instance not aws_accesskey. Execution aborted.

This error is raised when aws_accesskey is defined in the configuration file but the format is not str.

Edit the value of aws_accesskey inside the credentials dictionary so it is of type str.

21

"aws_secretkey" setting has not been found as key of aws_secretkey. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_secretkey is not found inside the credentials dictionary of the configuration file.

Add the aws_secretkey property to the credentials dictionary in the configuration file.

22

"aws_secretkey" setting should be a str instance not aws_secretkey. Execution aborted.

This error is raised when aws_secretkey is defined in the configuration file but the format is not str.

Edit the value of aws_secretkey inside the credentials dictionary so it is of type str.

23

"org_key" setting has not been found as key of org_key. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property org_key is not found inside the credentials dictionary of the configuration file.

Add the org_key property to the credentials dictionary in the configuration file.

24

"org_key" setting should be a str instance not org_key. Execution aborted.

This error is raised when org_key is defined in the configuration file but the format is not str.

Edit the value of org_key inside the credentials dictionary so it is of type str.

25

"services" setting from "input_config" has not been found in configuration file. This setting is mandatory. Execution aborted.

This error is raised when the required property services is not found in the configuration file.

Add services dictionary in the configuration file.

26

"services" setting from "input_config" section in {user_config} file should be a <dict> instance not <{type(services)}>. Execution aborted.

This error is raised when services is defined in the configuration file but the format is not dict.

Edit the value of services in the configuration file so it is of type dict.

27

"override_files_per_request" setting from "services" section in {user_config} file should be a <int> instance not <{type(override_files_per_request)}>. Execution aborted.

This error is raised when the optional value override_files_per_request added in the configuration file is not of type int.

Edit the value of override_files_per_request in the configuration file so it is of type int.

28

"override_devo_tag" setting from "services" section in {user_config} file should be a <str> instance not <{type(override_devo_tag)}>. Execution aborted.

This error is raised when the optional value override_devo_tag added in the configuration file is not of type str.

Edit the value of override_devo_tag in the configuration file so it is of type str.

29

"override_kognos_categorization" setting from "services" section in {service_config} file should be a <bool> instance not <{type(override_devo_tag)}>. Execution aborted.

This error is raised when the optional value override_kognos_categorization added in the configuration file is not of type bool.

Edit the value of override_kognos_categorization in the configuration file so it is of type bool.

30

"aws_region" setting should be a str instance not aws_region. Execution aborted.

This error is raised when aws_region is defined in the configuration file but the format is not str.

Edit the value of aws_region in the configuration file so it is of type str.

31

"aws_region" setting has not been found as key of aws_region. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_region is not found in the configuration file.

Add the aws_region property to the configuration file.

32

"bucket_name" setting should be a str instance not bucket_name. Execution aborted.

This error is raised when bucket_name is defined in the configuration file but the format is not str.

Edit the value of bucket_name in the configuration file so it is of type str.

33

"bucket_name" setting has not been found as key of bucket_name. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property bucket_name is not found in the configuration file.

Add the bucket_name property to the configuration file.

34

"queue_name" setting should be a str instance not queue_name. Execution aborted.

This error is raised when queue_name is defined in the configuration file but the format is not str.

Edit the value of queue_name in the configuration file so it is of type str.

35

"queue_name" setting has not been found as key of queue_name. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property queue_name is not found in the configuration file.

Add the queue_name property to the configuration file.

SetupError

104

There was an error reaching the AWS server.

This error is raised when an error occurs while connecting to the AWS server.

Check that the internet connection is working properly. If the problem persists, contact the Devo support team.

105

<error_message>

This error is raised when the API returns a 401 code. A 401 Unauthorized error right after logging in means the credentials you entered are invalid.

Check that the credentials are correct. If the problem persists, contact the Devo support team.

106

<error_message>

This error is raised when an unknown HTTP error occurs.

Contact the Devo support team.

107

The desired bucket has not been configured to send its events to <queue_name> Queue. Please configure this in the Bucket options before running the collector.

This error is raised when the S3 bucket is not configured to send events to the SQS queue.

Configure the S3 bucket to send its events to the SQS queue.

108

<error_message>

This error is raised when the API returns a 413 code, which occurs when the size of a client's request exceeds the server's file size limit.

Contact the Devo support team.

109

There are no folders in the S3 bucket that match the org key provided.

This error is raised when the org_key does not match the key of the queue files.

Set a correct org_key in the org_key parameter of the configuration file.

110

Unable to get QueueConfigurations from bucket. Please configure the S3 trigger by selecting the S3 bucket you created earlier'

This error is raised when the S3 bucket has no QueueConfigurations notification set up.

Configure the S3 bucket's event notifications (QueueConfigurations) to point at the SQS queue.
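Errors 107 and 110 both point at missing S3-to-SQS event notifications. As a hedged sketch (the region, account ID, and queue name in the ARN are placeholders), the bucket's notification configuration should look roughly like this, matching the s3:ObjectCreated:* events seen in the setup log:

```json
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:<aws_region>:<account_id>:<queue_name>",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```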

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Expand
titleVerify collector operations

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, and of validating the given configuration.

A successful run has the following output messages for the initializer module:

Code Block
INFO MainThread -> (CollectorMultithreadingQueue) standard_queue_multithreading -> max_size_in_messages: 10000, max_size_in_mb: 1024, max_wrap_size_in_items: 100
WARNING MainThread -> [INTERNAL LOGIC] DevoSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: collector-us.devo.io
WARNING MainThread -> [OUTPUT] OutputLookupSenders -> <threshold_for_using_gzip_in_transport_layer> setting has been modified from 1.1 to 1.0 due to this configuration increases the Lookup sender performance.
WARNING MainThread -> [INTERNAL LOGIC] DevoSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: collector-us.devo.io
INFO MainThread -> [OUTPUT] OutputMultithreadingController(threatquotient_collector) -> Starting thread
INFO MainThread -> [OUTPUT] DevoSender(standard_senders,devo_sender_0) -> Starting thread
INFO MainThread -> [OUTPUT] DevoSenderManagerMonitor(standard_senders,devo_1) -> Starting thread (every 600 seconds)
INFO MainThread -> [OUTPUT] DevoSenderManager(standard_senders,manager,devo_1)(devo_1) -> Starting thread
INFO MainThread -> [OUTPUT] DevoSender(lookup_senders,devo_sender_0) -> Starting thread
INFO MainThread -> [OUTPUT] DevoSenderManagerMonitor(lookup_senders,devo_1) -> Starting thread (every 600 seconds)
INFO MainThread -> [OUTPUT] DevoSenderManager(lookup_senders,manager,devo_1)(devo_1) -> Starting thread
INFO MainThread -> InitVariables Started
INFO MainThread -> start_time_value initialized
INFO MainThread -> verify_host_ssl_cert initialized
INFO MainThread -> event_fetch_limit_in_items initialized
INFO MainThread -> InitVariables Terminated
INFO MainThread -> [INPUT] InputMultithreadingController(threatquotient_collector) - Starting thread (executing_period=300s)
INFO MainThread -> [INPUT] InputThread(threatquotient_collector,threatquotient_data_puller#111) - Starting thread (execution_period=600s)
INFO MainThread -> [INPUT] ServiceThread(threatquotient_collector,threatquotient_data_puller#111,events#predefined) - Starting thread (execution_period=600s)
INFO MainThread -> [SETUP] ThreatQuotientDataPullerSetup(threatquotient_collector,threatquotient_data_puller#111,events#predefined) - Starting thread
INFO MainThread -> [INPUT] ThreatQuotientDataPuller(threatquotient_collector,threatquotient_data_puller#111,events#predefined) - Starting thread

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method.
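The puller-to-sender handoff described above can be sketched as a simple producer/consumer pair over a bounded queue. The names below (`puller`, `sender`) are illustrative only, not the collector's actual internal classes:

```python
# Illustrative sketch of the puller -> internal queue -> sender handoff.
# Names are hypothetical; the real collector uses its own SDK classes.
import queue
import threading

events_queue = queue.Queue(maxsize=10000)  # bounded internal queue
delivered = []

def puller():
    """Producer: injects pulled events into the internal queue."""
    for i in range(5):
        events_queue.put(f"event-{i}")
    events_queue.put(None)  # sentinel: no more events

def sender():
    """Consumer: drains the queue and 'delivers' each event."""
    while True:
        event = events_queue.get()
        if event is None:
            break
        delivered.append(event)

t_in = threading.Thread(target=puller)
t_out = threading.Thread(target=sender)
t_in.start()
t_out.start()
t_in.join()
t_out.join()
```

The bounded queue is what produces the "internal queue size" figures shown in the monitor traces below: if senders fall behind, the queue fills and pullers block instead of losing events.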

A successful run has the following output messages for the event delivery module:

Code Block
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Sender: SyslogSender(standard_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Standard - Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 44 (elapsed 0.007 seconds)
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Sender: SyslogSender(internal_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Internal - Total number of messages sent: 1, messages sent since "2022-06-28 10:39:22.516313+00:00": 1 (elapsed 0.019 seconds)
Info

By default, these information traces will be displayed every 10 minutes.

Sender services

The Integrations Factory Collector SDK has 3 different sender services, depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:

Sender services

Description

internal_senders

In charge of delivering internal data to Devo, such as logging traces or metrics.

standard_senders

In charge of delivering pulled events to Devo.

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service.

sender manager internal queue size: 0

Displays the items available in the internal sender queue.

Info

This value helps to detect bottlenecks and situations where the performance of data delivery to Devo needs to be increased. This can be done by increasing the number of concurrent senders.

Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of events sent to Devo since the last checkpoint. Following the given example, the following conclusions can be drawn:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.
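The same conclusions can be extracted mechanically from a statistics trace. The following is a hypothetical helper, not part of the collector, that parses the trace line shown above:

```python
# Hypothetical helper that parses a sender statistics trace line like the
# one documented above; it is not part of the collector itself.
import re

TRACE = ('Standard - Total number of messages sent: 44, messages sent since '
         '"2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)')

def parse_sender_stats(line):
    """Extract total, checkpoint, delta, and elapsed time from a stats trace."""
    m = re.search(
        r'Total number of messages sent: (\d+), messages sent since '
        r'"([^"]+)": (\d+) \(elapsed ([\d.]+) seconds\)',
        line,
    )
    if not m:
        raise ValueError("not a sender statistics trace")
    total, checkpoint, delta, elapsed = m.groups()
    return {
        "total_sent": int(total),
        "checkpoint": checkpoint,
        "sent_since_checkpoint": int(delta),
        "elapsed_seconds": float(elapsed),
    }

stats = parse_sender_stats(TRACE)
# stats["sent_since_checkpoint"] / stats["elapsed_seconds"] gives a rough
# delivery rate in events per second for the last interval.
```
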

Info

By default, these traces will be shown every 10 minutes.

...

Expand
titleEnable/disable the logging debug mode

Sometimes it is necessary to activate the debug mode of the collector's logging. This debug mode increases the verbosity of the log and allows you to print execution traces that are very helpful in resolving incidents or detecting bottlenecks in heavy download processes.

  • To enable this option, edit the configuration file, change the debug_status parameter from false to true, and restart the collector.

  • To disable this option, edit the configuration file, change the debug_status parameter from true to false, and restart the collector.

For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.
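For reference, the toggle is a single boolean in the collector configuration file. The fragment below is illustrative only; the surrounding keys and nesting depend on the chosen deployment mode:

```yaml
# Illustrative fragment only; see the configuration section for the
# full file structure of your deployment mode.
debug_status: true
```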

Change log for v1.x.x

Release

Released on

Release type

Details

Recommendations

v1.0.0

Status
colourPurple
titleNEW FEATURE

New features:

  • CBC Event Forwarder ingestion through S3+SQS (AWS Platform)

  • Two tag-mapping modes:

    • Grouping by events and alerts (compatible with Kognos data feed requirements)

      • endpoint.vmware.cbc_event_forwarder.kognos_alerts

      • endpoint.vmware.cbc_event_forwarder.kognos_events

    • Grouping by event type:

      • endpoint.vmware.cbc_event_forwarder

      • endpoint.vmware.cbc_event_forwarder.{type}

Recommended version

 

 

...