We use a piece of software called Collector Server to host and manage all our available collectors.
To enable the collector for a customer:
In the Collector Server GUI, access the domain in which you want this instance to be created.
Click Add Collector and find the one you wish to add.
In the Version field, select the latest value.
In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).
In the sending method, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.
In the Parameters section, establish the collector parameters as follows:
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.
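For illustration, a services object in the collector configuration might look like the following sketch (the service key aws_sqs and all values shown are placeholders, not a verified schema; each parameter is described in the table below):

```json
{
  "services": {
    "aws_sqs": {
      "debug_status": false,
      "short_unique_id": 1,
      "enabled": true,
      "base_url": "https://sqs.us-east-1.amazonaws.com/<account-number>/<queue-name>",
      "region": "us-east-1",
      "ack_messages": false,
      "sqs_visibility_timeout": 600,
      "sqs_wait_timeout": 20,
      "sqs_max_messages": 1,
      "compressed_events": false
    }
  }
}
```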
Note
Replace the placeholders with real-world values following the description table below.
Parameter
Data type
Type
Value range / Format
Details
debug_status
bool
Mandatory
false / true
If the value is true, the debug logging traces will be enabled when running the collector. If the value is false, only the info, warning and error logging levels will be printed.
short_unique_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique id to this input service.
Note
This parameter is used to build the persistence address. Do not use the same value for multiple collectors, as it could cause a collision.
enabled
bool
Mandatory
false / true
Use this param to enable or disable the given input logic when running the collector. If the value is true, the input will be run. If the value is false, it will be ignored.
base_url
str
Mandatory
Format: https://sqs.<region>.amazonaws.com/<account-number>/<queue-name>
Set this to the URL of your SQS queue.
aws_access_key_id
str
Mandatory/Optional
Any
Only needed if not using cross-account access.
aws_secret_access_key
str
Mandatory/Optional
Any
Only needed if not using cross-account access.
aws_base_account_role
str
Mandatory/Optional
Any
Only needed if using cross-account access. This is Devo's cross-account role.
aws_cross_account_role
str
Mandatory/Optional
Any
Only needed if using cross-account access. This is your cross-account role.
aws_external_id
str
Optional
Any
An extra security measure you can set up.
ack_messages
bool
Mandatory
false / true
Must be set to true to delete messages from the queue. Leave it as false until testing is complete.
direct_mode
bool
Optional
false / true
Set to false in almost all scenarios.
This parameter should be removed if it is not used.
do_not_send
bool
Optional
false / true
Set to true to avoid sending the logs to Devo.
This parameter should be removed if it is not used.
debug_md5
bool
Optional
false / true
Set to true to send the message MD5 to my.app.sqs.message_body. Only needed for extra debugging of duplicates.
This parameter should be removed if it is not used.
sqs_visibility_timeout
int
Mandatory
Min: 120
Max: 43200 (higher values have not been tested)
This parameter specifies how long, in seconds, a message is held invisible to other consumers while the collector processes it. If the message is not processed and deleted within the allotted time, it is returned to the queue and can be processed again. The collector has to download and process large files, so set this high enough to cover that processing time; otherwise it defaults to 120. For CrowdStrike FDR, some messages can take 10-15 minutes to process, so increase the timeout to help reduce duplicates.
sqs_wait_timeout
int
Mandatory
Min: 20
Max: 20
This controls long polling: the collector waits up to this many seconds per poll for a message to arrive. If no message is found, it returns Long poll did not find any messages in queue. All data in the SQS queue has been successfully collected. The minimum value has handled most customer scenarios so far.
sqs_max_messages
int
Mandatory
Min: 1
Max: 6
The maximum number of messages retrieved per poll. This should now always be set to 1.
region
str
Mandatory
Example:
us-east-1
This is the AWS region used in the base URL.
compressed_events
bool
Mandatory
false / true
Only works with GZIP compression. Should be false unless you see the error below.
If you see errors like 'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte, the events may need to be decompressed; set this to true.
encoding
str
Optional
This parameter specifies how the log files are encoded inside the S3 bucket.
Options, from most used to least used:
gzip
none
parquet
latin-1
Note
It can accept any other string, such as ascii or utf-16; it simply determines how the file is read.
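The interaction between compressed_events and the decoding error mentioned above can be sketched in Python (a simplified illustration, not the collector's actual implementation):

```python
import gzip

def decode_events(body: bytes, compressed_events: bool) -> str:
    # When compressed_events is true, the payload is gzip-compressed and must
    # be decompressed before it can be decoded as text.
    if compressed_events:
        body = gzip.decompress(body)
    # A payload that is still gzip-compressed typically fails here with
    # "'utf-8' codec can't decode byte ..." -- the symptom described above.
    return body.decode("utf-8")

# A gzip-compressed payload decodes correctly only after decompression:
raw = gzip.compress(b'{"event": "login"}')
print(decode_events(raw, compressed_events=True))  # {"event": "login"}
```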
On-premise collector
This data collector can be run on any machine that has the Docker service available, because it should be executed as a Docker container. The following sections explain how to prepare all the required setup for having the data collector running.
Structure
The following directory structure should be created to be used when running the collector:
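As a sketch, the layout could look like this (the config/ and state/ directory names are assumptions based on mounts commonly used for Devo collectors; the certs/ path is referenced in the next step):

```
<any_directory>/
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── config/
        │   └── config.yaml
        └── state/
```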
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.
Replace the placeholders with your required values following the description table below:
Parameter
Data type
Type
Value range
Details
debug_status
bool
Mandatory
false / true
If the value is true, the debug logging traces will be enabled when running the collector. If the value is false, only the info, warning and error logging levels will be printed.
collector_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique id to this collector.
collector_name
str
Mandatory
Minimum length: 1 Maximum length: 10
Use this param to give a valid name to this collector.
devo_address
str
Mandatory
collector-us.devo.io / collector-eu.devo.io
Use this param to identify the Devo Cloud where the events will be sent.
chain_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the chain.cert file downloaded from your Devo domain. Usually this file's name is: chain.crt
cert_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the file.cert downloaded from your Devo domain.
key_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the file.key downloaded from your Devo domain.
short_unique_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique id to this input service.
Note
This parameter is used to build the persistence address. Do not use the same value for multiple collectors, as it could cause a collision.
input_status
bool
Mandatory
false / true
Use this param to enable or disable the given input logic when running the collector. If the value is true, the input will be run. If the value is false, it will be ignored.
base_url
str
Mandatory
Format: https://sqs.<region>.amazonaws.com/<account-number>/<queue-name>
Set this to the URL of your SQS queue.
aws_access_key_id
str
Mandatory/Optional
Any
Only needed if not using cross-account access.
aws_secret_access_key
str
Mandatory/Optional
Any
Only needed if not using cross-account access.
aws_base_account_role
str
Mandatory/Optional
Any
Only needed if using cross-account access. This is Devo's cross-account role.
aws_cross_account_role
str
Mandatory/Optional
Any
Only needed if using cross-account access. This is your cross-account role.
aws_external_id
str
Optional
Any
An extra security measure you can set up.
ack_messages
bool
Mandatory
false / true
Must be set to true to delete messages from the queue. Leave it as false until testing is complete.
direct_mode
bool
Optional
false / true
Set to false in almost all scenarios.
This parameter should be removed if it is not used.
do_not_send
bool
Optional
false / true
Set to true to avoid sending the logs to Devo.
This parameter should be removed if it is not used.
debug_md5
bool
Optional
false / true
Set to true to send the message MD5 to my.app.sqs.message_body. Only needed for extra debugging of duplicates.
This parameter should be removed if it is not used.
sqs_visibility_timeout
int
Mandatory
Min: 120
Max: 43200 (higher values have not been tested)
This parameter specifies how long, in seconds, a message is held invisible to other consumers while the collector processes it. If the message is not processed and deleted within the allotted time, it is returned to the queue and can be processed again. The collector has to download and process large files, so set this high enough to cover that processing time; otherwise it defaults to 120. For CrowdStrike FDR, some messages can take 10-15 minutes to process, so increase the timeout to help reduce duplicates.
sqs_wait_timeout
int
Mandatory
Min: 20
Max: 20
This controls long polling: the collector waits up to this many seconds per poll for a message to arrive. If no message is found, it returns Long poll did not find any messages in queue. All data in the SQS queue has been successfully collected. The minimum value has handled most customer scenarios so far.
sqs_max_messages
int
Mandatory
Min: 1
Max: 6
The maximum number of messages retrieved per poll. This should now always be set to 1.
region
str
Mandatory
Example:
us-east-1
This is the AWS region used in the base URL.
compressed_events
bool
Mandatory
false / true
Only works with GZIP compression. Should be false unless you see the error below.
If you see errors like 'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte, the events may need to be decompressed; set this to true.
encoding
str
Optional
This parameter specifies how the log files are encoded inside the S3 bucket.
Options, from most used to least used:
gzip
none
parquet
latin-1
Note
It can accept any other string, such as ascii or utf-16; it simply determines how the file is read.
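As an illustration of how the SQS polling parameters in the table above relate to an SQS ReceiveMessage call (argument names follow the public boto3 API; the collector's internal code may differ):

```python
def receive_kwargs(params: dict) -> dict:
    # Map the collector parameters onto the arguments of an SQS
    # ReceiveMessage call (argument names follow the boto3 API).
    return {
        "QueueUrl": params["base_url"],
        "MaxNumberOfMessages": params["sqs_max_messages"],     # should be 1
        "WaitTimeSeconds": params["sqs_wait_timeout"],         # long polling
        "VisibilityTimeout": params["sqs_visibility_timeout"], # processing window
    }

kwargs = receive_kwargs({
    "base_url": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    "sqs_max_messages": 1,
    "sqs_wait_timeout": 20,
    "sqs_visibility_timeout": 600,
})
# With boto3 this would be passed as:
#   boto3.client("sqs", region_name="us-east-1").receive_message(**kwargs)
```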
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace <image_file> and <version> with the proper values.
The Docker image can be deployed on the following services:
Docker
Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/
Replace <product_name>, <image_name> and <version> with the proper values.
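The exact command is not reproduced here; a typical invocation could look like the following sketch (the in-container mount paths are assumptions, not confirmed by this document):

```shell
docker run \
  --name collector-<product_name> \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --rm -it <image_name>:<version>
```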
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.
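A sketch of such a file follows (the service name, mount paths, and IMAGE_VERSION variable usage are assumptions; adjust them to the actual image and directories):

```yaml
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    restart: always
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
```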
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:
IMAGE_VERSION=<version> docker-compose up -d
Note
Replace <product_name>, <image_name> and <version> with the proper values.
To enable this option, edit the configuration file, change the debug_status parameter from false to true, and restart the collector.
To disable this option, change the debug_status parameter from true to false and restart the collector.
For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.
Change log
Release
Released on
Release type
Details
Recommendations
v1.7.0
Bug Fixes
Fixed control tower issue
Fixed bug with Falcon Data Replicator Large where logs were taking over an hour to finish
Features
Created custom tagging off of record field mapping
Created NLB logging service
Added INFO/DEBUG logging around each method so users can see size and timing.
Recommended Version
v1.6.4
Features
Created custom tagging off of record field mapping
Added INFO/DEBUG logging around most methods so users can see size and timing.
Bug Fixes
Fixed Dependency Issue.
Fixed control tower issue
Fixed Falcon Data Replicator Large where logs were taking over an hour to finish.
Upgrade
v1.6.3
Bug Fixes
Fixed Log Operations Bug
Added Backwards compatibility to control tower
Fixed Palo Alto Service for snappy decompression.
Upgrade
v1.6.2
Bug Fixes
Fixed a NoneType error that caused message processing to fail in fdr_large.
Added default region to initialization of sts client to prevent needing environment variables in the green cluster.
Fixed bug in control tower processor
Upgrade
v1.6.1
Improvements
Created new processor for extracting a message from singular log
Recommended version
Upgrade
v1.6.0
Improvements
Upgraded DCSDK from 1.12.2 to 1.12.4
Removed Multithreading
Added a setup method
Removed Deduplication
Added debugging logging for using dynamic filenames to help with creating dynamic tags
Bug fixes
Fixed a bug where the message body was a string and caused a type error.
Fixed a bug where client was not refreshed in time before acknowledging a message.
Upgrade
v1.5.1
Bug fixes
Fixed dependency issue
Upgrade
v1.5.0
Features
Removed debug_md5 and made it default for all dictionary logs
Created a new vpc flow processor
Added new sender for relay in house + TLS
Added persistence functionality for gzip sending buffer
Added Automatic activation of gzip sending
Improvements
Updated docker image to 1.3.0
Updated DCSDK from 1.11.1 to 1.12.2
Fixed high vulnerability in Docker Image
Upgrade DevoSDK dependency to version v5.4.0
Fixed error in persistence system
Applied changes to make DCSDK compatible with MacOS
Added new sender for relay in house + TLS
Added persistence functionality for gzip sending buffer
Added Automatic activation of gzip sending
Improved behaviour when persistence fails
Upgraded DevoSDK dependency
Fixed console log encoding
Restructured python classes
Improved behaviour with non-utf8 characters
Decreased default size value for internal queues (Redis limitation, from 1GiB to 256MiB)
New persistence format/structure (compression in some cases)
Removed dmesg execution (It was invalid for docker execution)
Upgrade
v1.4.0
Features
Implemented pulling of events sent by EventBridge
Added more debugging information to events, such as the time the message was sent to the queue, the number of times it has been sent to the queue, the bucket, and the file name.
Bug fixes
Fixed an import dependency error
Improvements
Increased the visibility timeout to 1 hour by default
Upgrade
v1.3.2
Bug fixes
Fixed the initialization of the client credentials that was missing the token.
Upgrade
v1.3.1
Bug fixes
Fixed index out of range error in aws_sqs_fdr_large service
Upgrade
v1.3.0
Features
Fixed a logging message saying the message wasn't acked even though it was
Added use of 1-6 messages back in config
Added multithreading for downloading messages in parallel
Updated the aws_sqs_fdr_large service with a faster downloading method using ijson.
Upgrade
v1.2.3
Features
Updated to orjson for its performance benefits.
Upgrade
v1.2.2
Features
Changed processors to handle the log as json dumps instead of str
Upgrade
v1.2.1
Features
Added file filtering to the incapsula service
Upgrade
v1.2.0
Improvements
Updated to DCSDK 1.11.1
Added extra check for not valid message timestamps
Added extra check for improve the controlled stop
Changed default number for connection retries (now 7)
Fix for Devo connection retries
Upgrade
v1.1.3
Bug fixes
Fixed bug in parquet log processing
Fixed the max number of messages and updated the message timeout in flight
Fixed the way access key and secret are used
Improvements
Updated to DCSDK 1.11.0
Features
Added feature to send md5 message to my.app table
Added RDS service to collector defs
Upgrade
v1.0.1
Bug fixes
Fixed the state file
Improvements
Used the run method instead of pull to enable long polling.
Added different types of encoding (latin-1).
Updated collector definitions to be objects instead of arrays, which was throwing off tagging and record field mapping.