VMware Carbon Black Cloud Event Forwarder collector

Overview

VMware Carbon Black Cloud Event Forwarder is a cloud-native endpoint security product designed to detect malicious behavior and help prevent malicious files from attacking an organization. It lets you send data about alerts and events to an AWS S3 bucket, where it can be consumed by other applications.

Devo collector features

Feature

Details

Allow parallel downloading (multipod)

  • Allowed

Running environments

  • Collector server

  • On-premise

Populated Devo events

  • Table

Flattening preprocessing

  • No

Data sources

Data source

Description

API endpoint

Collector service name

Devo table

Available from release

Event Forwarder

The Carbon Black Cloud Event Forwarder lets you send data about alerts and events to an AWS S3 bucket, where it can be forwarded to other applications in your security stack.

Data Forwarder Configuration API - Carbon Black Developer Network

AWS S3 bucket

event_forwarder

endpoint.vmware.cbc_event_forwarder

v1.0.0

endpoint.vmware.cbc_event_forwarder.cb_analytics

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_apicall

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_crossproc

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_fileless_scriptload

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_filemod

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_moduleload

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_netconn

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_procstart

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_procend

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_regmod

v1.0.0

endpoint.vmware.cbc_event_forwarder.endpoint_event_scriptload

v1.0.0

endpoint.vmware.cbc_event_forwarder.unknown

v1.0.0

endpoint.vmware.cbc_event_forwarder.kognos_alerts

v1.0.0

endpoint.vmware.cbc_event_forwarder.kognos_events

v1.0.0

Flattening preprocessing

Data source

Collector service

Optional

Source

Service

  • No

Vendor setup

Follow these steps to set up this collector:

  1. Log in with your credentials to the Carbon Black console.

  2. Note your Org Key, shown at the top left of the console.

  3. Go to Settings → API Access.

  4. Select the Access Level tab.

  5. Click on Add Access Level on the top-right.

  6. Give it a unique name and a description.

  7. Scroll down to the permissions table and look for the Event forwarding category. Select the required permission columns for that category and click Save.

  8. Select the API Keys tab.

  9. Click on Add API Key.

  10. Give it a unique name and the appropriate access level. Select Custom so you can choose the Access Level you created before. Note: choose a name that clearly distinguishes this API key from your other API keys. You can also add authorized IP addresses and a description to differentiate it from other APIs.

  11. Click Save and your credentials will display.

  12. You can view your credentials by opening the Actions drop-down and selecting API Credentials.

  13. Create your forwarder using the Data Forwarder Configuration API. A successful creation adds a healthcheck.json file to your events folder in the S3 bucket.

  14. Update your config.yaml with the appropriate values, including the AWS region and SQS queue_name.

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

This minimum configuration refers exclusively to the parameters specific to this integration. There are more required parameters related to the generic behavior of the collector. Check the settings sections for details.

Setting

Details

org_key

This parameter is the Carbon Black Cloud organization key.

aws_accesskey

The AWS access key.

aws_secretkey

The AWS secret key.

aws_region

A list of valid target region names to be used when collecting data. One processing thread will be created per region.

bucket_name

The AWS S3 bucket name. Examples:

  • docexamplebucket1

  • log-delivery-march-2020

  • my-hosted-content

queue_name

The AWS SQS queue name.
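
Putting the settings above together, the input section of the collector's config.yaml could look like the sketch below. The surrounding structure (inputs block, input id, service keys) is illustrative of the generic collector configuration layout; only the parameter names inside credentials and the service block come from the table above.

```yaml
inputs:
  carbonblackcloud:
    id: "12345"                        # illustrative input id
    enabled: true
    credentials:
      org_key: <org_key_value>
      aws_accesskey: <aws_access_key_value>
      aws_secretkey: <aws_secret_key_value>
    services:
      event_forwarder:
        aws_region: <aws_region_value>      # e.g. us-east-1
        bucket_name: <bucket_name_value>    # e.g. log-delivery-march-2020
        queue_name: <queue_name_value>
```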

Accepted authentication methods

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

Events service

Once the collector has been launched, it is important to check if the ingestion is being performed properly. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

INFO InputProcess::MainThread -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Starting thread
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) Starting the execution of setup()
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Setting up Event Forwarder puller, performing a test request to the API to check if the credentials provided are valid.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> The AWS servers have been reached with no issues. Proceeding to test access to the SQS and S3 services
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> S3 Bucket kognos-devo-7desj9gn-cb is configured to send ['s3:ObjectCreated:*'] to SQS queue: kognos-devo-cbq.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> There were no errors while accessing the Event forwarder service with the provided API access and secret key, proceeding to pull the data.
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) Finalizing the execution of setup()
INFO InputProcess::CarbonBlackCloudPullerSetup(cbc_collector,carbonblackcloud#12345,event_forwarder#predefined) -> Setup for module <CarbonBlackCloudEventForwarderPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Pull Started. Retrieving timestamp: 2022-09-26 08:01:02.286976+00:00
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Retrieving Queue with name: name
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Retrieving Messages form Queue with name: name
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Number of files detected through the queue: 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_filemod": 8
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_procstart": 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_crossproc": 2
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_moduleload": 35
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Events sent for tag "endpoint.vmware.cbc_event_forwarder.endpoint_event_procend": 1
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> (Partial) Statistics for this pull cycle (@devo_pulling_id=1664172062843) so far: Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47;
INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1664172062843): Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47; Average of events per second: 93.91 Elapsed in seconds: 0.5

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

INFO InputProcess::CarbonBlackCloudEventForwarderPuller(carbonblackcloud,12345,event_forwarder,predefined) -> Statistics for this pull cycle (@devo_pulling_id=1664172062843): Number of requests made: 2; Number of files processed: 1/1; Number of files filtered out: 0; Number of events filtered: 0; Number of events generated and sent: 47; Average of events per second: 93.91 Elapsed in seconds: 0.5

This collector does not use persistence because it consumes events from an AWS SQS queue.
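
Concretely, each SQS message carries a standard S3 event notification naming the newly created objects, so the queue itself acts as the checkpoint. A stdlib-only sketch of that parsing step (the function name is illustrative, not the collector's actual code):

```python
import json

def parse_s3_records(message_body):
    """Extract (bucket, key) pairs from one SQS message carrying an S3
    event notification. The bucket publishes 's3:ObjectCreated:*' events,
    so each message names exactly the new files to download; no local
    persistence file is required.
    """
    notification = json.loads(message_body)
    return [
        (record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
        for record in notification.get("Records", [])
    ]
```

Once a message's objects have been downloaded and forwarded, deleting the message from the queue marks that batch as processed.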

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

Error type

Error ID

Error message

Cause

Solution

InitVariablesError

1

"module_properties" setting from "module_definition" has not been found in <collector_definitions> file. This setting is mandatory. Execution aborted

This error is raised when module_properties property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

2

"module_properties" setting from "module_definition" section in <collector_definitions> file should be a <dict> instance not <{type(module_properties)}>. Execution aborted.

This error is raised when module_properties is defined in collector_definitions.yaml but the format is not dict.

This is an internal issue. Contact the Devo Support team.

3

"base_tag" setting has not been found as key of base_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when base_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

4

"base_tag" setting should be a str instance not base_tag. Execution aborted.

This error is raised when base_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo Support team.

5

"files_per_request" setting has not been found as key of files_per_request. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when files_per_request property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

6

"files_per_request" setting should be a int instance not files_per_request. Execution aborted.

This error is raised when files_per_request is defined in collector_definitions.yaml but the format is not int.

This is an internal issue. Contact the Devo Support team.

7

"files_per_request" cannot be less than 0. Change value of the "files_per_request" in the configuration to a number greater than or equal to 0

This error is raised when files_per_request is defined in collector_definitions.yaml but is less than 0.

This is an internal issue. Contact the Devo Support team.

8

"kognos_categorization" setting has not been found as key of kognos_categorization. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_categorization property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

9

"kognos_categorization" setting should be a bool instance not kognos_categorization. Execution aborted.

This error is raised when kognos_categorization is defined in collector_definitions.yaml but the format is not bool.

This is an internal issue. Contact the Devo Support team.

11

"kognos_alerts_tag" setting has not been found as key of kognos_alerts_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_alerts_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

12

"kognos_alerts_tag" setting should be a str instance not kognos_alerts_tag. Execution aborted.

This error is raised when kognos_alerts_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo Support team.

14

"kognos_events_tag" setting has not been found as key of kognos_events_tag. This setting is mandatory in collector_definitions. Execution aborted.

This error is raised when kognos_events_tag property is not found in collector_definitions.yaml.

This is an internal issue. Contact the Devo Support team.

16

"kognos_events_tag" setting should be a str instance not kognos_events_tag. Execution aborted.

This error is raised when kognos_events_tag is defined in collector_definitions.yaml but the format is not str.

This is an internal issue. Contact the Devo Support team.

17

"credentials" setting from "input_config" has not been found in configuration file. This setting is mandatory. Execution aborted.

This error is raised when the required property credentials is not found in the configuration file.

Add the credentials dictionary to the configuration file, including the org_key, aws_accesskey, and aws_secretkey fields.

18

"credentials" setting from "input_config" section in configuration file should be a <dict> instance not <{type(credentials)}>. Execution aborted.

This error is raised when credentials is defined in the configuration file but the format is not dict.

Edit the value of credentials in the configuration file so it is of type dict.

19

"aws_accesskey" setting has not been found as key of aws_accesskey. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_accesskey is not found inside the credentials dictionary of the configuration file.

Add the aws_accesskey property inside the credentials dictionary of the configuration file.

20

"aws_accesskey" setting should be a str instance not aws_accesskey. Execution aborted.

This error is raised when aws_accesskey is defined in the configuration file but the format is not str.

Edit the value of aws_accesskey, inside the credentials dictionary of the configuration file, so it is of type str.

21

"aws_secretkey" setting has not been found as key of aws_secretkey. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_secretkey is not found inside the credentials dictionary of the configuration file.

Add the aws_secretkey property inside the credentials dictionary of the configuration file.

22

"aws_secretkey" setting should be a str instance not aws_secretkey. Execution aborted.

This error is raised when aws_secretkey is defined in the configuration file but the format is not str.

Edit the value of aws_secretkey, inside the credentials dictionary of the configuration file, so it is of type str.

23

"org_key" setting has not been found as key of org_key. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property org_key is not found inside the credentials dictionary of the configuration file.

Add the org_key property inside the credentials dictionary of the configuration file.

24

"org_key" setting should be a str instance not org_key. Execution aborted.

This error is raised when org_key is defined in the configuration file but the format is not str.

Edit the value of org_key, inside the credentials dictionary of the configuration file, so it is of type str.

25

"services" setting from "input_config" has not been found in configuration file. This setting is mandatory. Execution aborted.

This error is raised when the required property services is not found in the configuration file.

Add the services dictionary to the configuration file.

26

"services" setting from "input_config" section in {user_config} file should be a <dict> instance not <{type(services)}>. Execution aborted.

This error is raised when services is defined in the configuration file but the format is not dict.

Edit the value of services in the configuration file so it is of type dict.

27

"override_files_per_request" setting from "services" section in {user_config} file should be a <int> instance not <{type(override_files_per_request)}>. Execution aborted.

This error is raised when optional value override_files_per_request added in the configuration file is not of type int.

Edit the value of override_files_per_request in the configuration file so it is of type int.

28

"override_devo_tag" setting from "services" section in {user_config} file should be a <str> instance not <{type(override_devo_tag)}>. Execution aborted.

This error is raised when optional value override_devo_tag added in the configuration file is not of type str.

Edit the value of override_devo_tag in the configuration file so it is of type str.

29

"override_kognos_categorization" setting from "services" section in {service_config} file should be a <bool> instance not <{type(override_devo_tag)}>. Execution aborted.

This error is raised when the optional value override_kognos_categorization added in the configuration file is not of type bool.

Edit the value of override_kognos_categorization in the configuration file so it is of type bool.

30

"aws_region" setting should be a str instance not aws_region. Execution aborted.

This error is raised when aws_region is defined in the configuration file but the format is not str.

Edit the value of aws_region in the configuration file so it is of type str.

31

"aws_region" setting has not been found as key of aws_region. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property aws_region is not found in the configuration file.

Add the aws_region property to the configuration file.

32

"bucket_name" setting should be a str instance not bucket_name. Execution aborted.

This error is raised when bucket_name is defined in the configuration file but the format is not str.

Edit the value of bucket_name in the configuration file so it is of type str.

33

"bucket_name" setting has not been found as key of bucket_name. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property bucket_name is not found in the configuration file.

Add the bucket_name property to the configuration file.

34

"queue_name" setting should be a str instance not queue_name. Execution aborted.

This error is raised when queue_name is defined in the configuration file but the format is not str.

Edit the value of queue_name in the configuration file so it is of type str.

35

"queue_name" setting has not been found as key of queue_name. This setting is mandatory in configuration file. Execution aborted.

This error is raised when the required property queue_name is not found in the configuration file.

Add the queue_name property to the configuration file.

SetupError

104

There was an error reaching the AWS server.

This error is raised when an error occurs while connecting to the AWS server.

Check that the internet connection is working properly. If the problem persists, contact the Devo Support team.

105

<error_message>

This error is raised when the API returns a 401 status code. If you have just logged in and received the 401 Unauthorized error, it means that the credentials you entered are invalid.

Check that the credentials are correct. If the problem persists, contact the Devo Support team.

106

<error_message>

This error is raised when an unknown HTTP error occurs.

Contact the Devo Support team.

107

The desired bucket has not been configured to send its events to <queue_name> Queue. Please configure this in the Bucket options before running the collector.

This error is raised when the S3 bucket is not configured to send events to the SQS queue.

Configure the S3 bucket to send its events to the SQS queue.

108

<error_message>

This error is raised when the API returns a 413 status code, which occurs when the size of a client's request exceeds the server's file size limit.

Contact the Devo Support team.

109

There are no folders in the S3 bucket that match the org key provided.

This error is raised when the org_key does not match the key of the queue files.

Set the correct value in the org_key parameter of the configuration file.

110

Unable to get QueueConfigurations from bucket. Please configure the S3 trigger by selecting the S3 bucket you created earlier

This error is raised when the bucket's event notifications (QueueConfigurations) cannot be retrieved.

Configure the S3 bucket's event notifications so that object-created events are sent to the SQS queue.
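
Errors 107 and 110 both point at the S3-to-SQS wiring. The bucket's notification configuration should look roughly like the sketch below (the configuration Id and queue ARN are placeholders); it can be applied with `aws s3api put-bucket-notification-configuration --bucket <bucket_name> --notification-configuration file://notification.json`:

```json
{
  "QueueConfigurations": [
    {
      "Id": "cbc-event-forwarder-new-objects",
      "QueueArn": "arn:aws:sqs:<aws_region>:<account_id>:<queue_name>",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

This matches the `['s3:ObjectCreated:*']` check that the setup module logs when it verifies the bucket-to-queue link.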

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, and of validating the given configuration.

A successful run has the following output messages for the initializer module:

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.

A successful run has the following output messages for the initializer module:

Sender services

The Integrations Factory Collector SDK has three different sender services, depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:

Sender services

Description

internal_senders

In charge of delivering internal collector data to Devo, such as logging traces or metrics.

standard_senders

In charge of delivering pulled events to Devo.

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service.

sender manager internal queue size: 0

Displays the items available in the internal sender queue.

Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of events sent since the last checkpoint. Following the given example, these conclusions can be drawn:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.
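
These traces are plain text, so they are easy to monitor programmatically. A small sketch that extracts the counters and derives the delivery rate (the regex assumes the exact trace format shown above, which may change across collector versions):

```python
import re

# Matches the sender statistics trace shown above.
STATS = re.compile(
    r'Total number of messages sent: (?P<total>\d+), '
    r'messages sent since "(?P<since>[^"]+)": (?P<recent>\d+) '
    r'\(elapsed (?P<elapsed>[\d.]+) seconds\)'
)

def parse_sender_stats(line):
    """Return the counters from one statistics trace, plus events/second."""
    match = STATS.search(line)
    if match is None:
        return None
    recent = int(match.group("recent"))
    elapsed = float(match.group("elapsed"))
    return {
        "total": int(match.group("total")),
        "since": match.group("since"),
        "recent": recent,
        "rate_eps": recent / elapsed if elapsed else 0.0,
    }
```

For the example trace above, the parsed result confirms the conclusions listed: 44 total events, 21 since the last checkpoint, delivered in 0.007 seconds.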

To check the memory usage of this collector, look for the following log records in the collector, which are displayed every 5 minutes by default, always after running the memory-free process.

  • The used memory is displayed per running process, and the sum of both values gives the total memory used by the collector.

  • The global pressure of the available memory is displayed in the global value.

  • All metrics (Global, RSS, VMS) include the value before and after freeing memory (previous -> after).

Change log for v1.x.x

Release

Released on

Release type

Details

Recommendations

v1.0.0

Sep 23, 2022

NEW FEATURE

New features:

  • CBC Event Forwarder ingestion through S3+SQS (AWS Platform)

  • Two-way tag mapping:

    • Grouping by events and alerts (compatible with Kognos data feed requirements)

      • endpoint.vmware.cbc_event_forwarder.kognos_alerts

      • endpoint.vmware.cbc_event_forwarder.kognos_events

    • Grouping by event type:

      • endpoint.vmware.cbc_event_forwarder

      • endpoint.vmware.cbc_event_forwarder.{type}

Recommended version