...
Data source | Security purpose | Collector service name | Devo table
---|---|---|---
Any | The collector can be customized to process any data. Use a custom service only if there is no prebuilt service. | | All
| Cloud Resource Audit | |
| Load Balancer | |
| Load Balancer | |
| DNS | |
| Content Distribution | |
| Content Distribution | |
| AWS Audit | |
| AWS Audit | CLOUDTRAIL VIA KINESIS FIREHOSE |
| Instance Metrics | |
| Private Cloud Metrics | CLOUDWATCH VPC |
VPC Flow Logs, CloudTrail, CloudFront, and/or AWS Config logs (deprecated) | | |
| Antivirus | |
| Threat Detection | GUARD DUTY VIA KINESIS FIREHOSE |
| Content Delivery | |
| Container and Cloud | |
| Firewall | |
| Domain Name Service | |
| Windows and Unix events | OPERATING SYSTEM |
| Endpoint Detections | |
| S3 Bucket Audit | |
| Private Cloud | |
| Firewall | |
Run the collector
...
We use a piece of software called Collector Server to host and manage all our available collectors.
To enable the collector for a customer:
1. In the Collector Server GUI, access the domain in which you want this instance to be created.
2. Click Add Collector and find the one you wish to add.
3. In the Version field, select the latest value.
4. In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).
5. In the sending method, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.
6. In the Parameters section, establish the Collector Parameters as follows:
Editing the JSON configuration
```json
{
  "global_overrides": {
    "debug": false
  },
  "inputs": {
    "sqs_collector": {
      "id": "12351",
      "enabled": true,
      "credentials": {
        "aws_access_key_id": "",
        "aws_secret_access_key": "",
        "aws_base_account_role": "arn:aws:iam::837131528613:role/devo-xaccount-cs-role",
        "aws_cross_account_role": "",
        "aws_external_id": ""
      },
      "ack_messages": false,
      "direct_mode": false,
      "do_not_send": false,
      "compressed_events": false,
      "base_url": "https://us-west-1.queue.amazonaws.com/id/name-of-queue",
      "region": "us-west-1",
      "sqs_visibility_timeout": 240,
      "sqs_wait_timeout": 20,
      "sqs_max_messages": 1,
      "services": {
        "custom_service": {
          "file_field_definitions": {},
          "filename_filter_rules": [],
          "encoding": "gzip",
          "send_filtered_out_to_unknown": false,
          "file_format": {
            "type": "line_split_processor",
            "config": {
              "json": true
            }
          },
          "record_field_mapping": {
            "event_simpleName": {
              "keys": [
                "event_simpleName"
              ]
            }
          },
          "routing_template": "edr.crowdstrike.cannon",
          "line_filter_rules": [
            [
              {
                "source": "record",
                "key": "event_simpleName",
                "type": "match",
                "value": "EndOfProcess"
              }
            ],
            [
              {
                "source": "record",
                "key": "event_simpleName",
                "type": "match",
                "value": "DeliverLocalFXToCloud"
              }
            ]
          ]
        }
      }
    }
  }
}
```
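For intuition, each inner list in `line_filter_rules` above is a rule group: a record is filtered out when all conditions in any one group match. A minimal sketch of that matching logic (a hypothetical re-implementation for illustration, not the collector's actual code):

```python
import json

# Mirrors the "line_filter_rules" entries from the configuration above.
# Each inner list is a rule group; a record is dropped when every
# condition in any one group matches it.
LINE_FILTER_RULES = [
    [{"source": "record", "key": "event_simpleName", "type": "match", "value": "EndOfProcess"}],
    [{"source": "record", "key": "event_simpleName", "type": "match", "value": "DeliverLocalFXToCloud"}],
]

def is_filtered_out(record: dict) -> bool:
    """Return True when the record matches any rule group."""
    for group in LINE_FILTER_RULES:
        if all(record.get(cond["key"]) == cond["value"]
               for cond in group if cond["type"] == "match"):
            return True
    return False

# Example: only the non-matching line survives filtering.
lines = ['{"event_simpleName": "EndOfProcess"}',
         '{"event_simpleName": "ProcessRollup2"}']
kept = [line for line in lines if not is_filtered_out(json.loads(line))]
print(kept)
```

With `send_filtered_out_to_unknown` set to false, records dropped this way are simply discarded rather than routed to an unknown table.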
**Info**: All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the configuration.
**Note**: Replace the placeholders with real-world values following the description table below.
| Parameter | Data type | Type | Value range / Format | Details |
|---|---|---|---|---|
| debug_status | bool | Mandatory | false / true | If the value is true, debug-level traces will be logged when running the collector. If the value is false, only the info, warning, and error levels will be printed. |
| short_unique_id | int | Mandatory | Minimum length: 1<br>Maximum length: 5 | Use this parameter to give a unique ID to this input service. **Note**: This parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| enabled | bool | Mandatory | false / true | Use this parameter to enable or disable the given input logic when running the collector. If the value is true, the input will be run. If the value is false, it will be ignored. |
| base_url | str | Mandatory | | The URL of the SQS queue, in the format https://sqs.region.amazonaws.com/account-number/queue-name. |
| aws_access_key_id | str | Mandatory/Optional | Any | Only needed if not using a cross-account role. |
| aws_secret_access_key | str | Mandatory/Optional | Any | Only needed if not using a cross-account role. |
| aws_base_account_role | str | Mandatory/Optional | Any | Only needed if using a cross-account role. This is Devo's cross-account role. |
| aws_cross_account_role | str | Mandatory/Optional | Any | Only needed if using a cross-account role. This is your cross-account role. |
| aws_external_id | str | Optional | Any | An extra layer of security you can set up. |
| ack_messages | bool | Mandatory | false / true | Must be set to true to delete messages from the queue after processing. Leave it false until testing is complete. |
| direct_mode | bool | Optional | false / true | Set to false for almost all scenarios. This parameter should be removed if it is not used. |
| do_not_send | bool | Optional | false / true | Set to true to avoid sending the logs to Devo. This parameter should be removed if it is not used. |
| sqs_visibility_timeout | int | Mandatory | Min: 120<br>Max: 43200 (higher values have not been tested) | How long (in seconds) a message stays invisible to other consumers while the collector holds it. If the message is not processed and deleted within that time, it is put back in the queue and can be processed again. Set this high enough for the collector to download and process large files; otherwise it defaults to 120. For CrowdStrike FDR, some messages can take 10-15 minutes to process, so raise the timeout to help reduce duplicates. |
| sqs_wait_timeout | int | Mandatory | Min: 20<br>Max: 20 | Controls long polling: each poll waits this many seconds for a message. If no message is found, the collector logs "Long poll did not find any messages in queue. All data in the SQS queue has been successfully collected." |
| sqs_max_messages | int | Mandatory | Min: 1<br>Max: 6 | Must always be set to 1. |
| region | str | Mandatory | Example: us-east-1 | The AWS region that appears in the base URL. |
| compressed_events | bool | Mandatory | false / true | Only works with gzip compression; this should be false unless you see errors like "'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte", which may mean the events need to be decompressed. |
| encoding | str | Optional | gzip / none / parquet / latin-1 | How the log files are encoded inside the S3 bucket, listed from most to least used. **Note**: It can accept any other string, such as ascii or utf-16; the value simply tells the collector how to read the file. |
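The interplay between sqs_visibility_timeout and ack_messages is worth internalizing: a received message is only hidden, not removed, and it reappears if it is not deleted in time. A toy in-memory model of that behaviour (illustrative only; this is neither boto3 nor the collector's code):

```python
class ToyQueue:
    """Minimal model of SQS visibility-timeout semantics."""

    def __init__(self):
        self._visible_at = {}  # message id -> time it becomes visible again

    def send(self, msg_id):
        self._visible_at[msg_id] = 0.0

    def receive(self, now, visibility_timeout):
        # Return the first visible message and hide it for the timeout window.
        for msg_id, visible_at in self._visible_at.items():
            if visible_at <= now:
                self._visible_at[msg_id] = now + visibility_timeout
                return msg_id
        return None

    def delete(self, msg_id):
        # This is what ack_messages=true does after successful processing.
        self._visible_at.pop(msg_id, None)

q = ToyQueue()
q.send("m1")
assert q.receive(now=0, visibility_timeout=240) == "m1"
assert q.receive(now=100, visibility_timeout=240) is None  # still in flight
assert q.receive(now=300, visibility_timeout=240) == "m1"  # not deleted -> redelivered
q.delete("m1")
assert q.receive(now=600, visibility_timeout=240) is None  # acked, gone for good
```

This is why ack_messages stays false during testing (messages are never lost) and why a visibility timeout shorter than the processing time produces duplicates.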
This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created before running the collector:
```text
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/
            └── config.yaml
```
**Note**: Replace `<any_directory>`, `<product_name>`, and `<your_domain>` with the values used in your deployment.
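The tree above can be created in one step. The product name `aws_sqs` below is just an example value for `<product_name>`; substitute your own:

```shell
# Create the collector's expected directory layout (example product name).
mkdir -p devo-collectors/aws_sqs/certs \
         devo-collectors/aws_sqs/state \
         devo-collectors/aws_sqs/config
```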
Devo credentials
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/
. Learn more about security credentials in Devo here.
...
**Note**: Replace the placeholders with your own values.
Editing the config.yaml file
```yaml
globals:
  debug: <debug_status>
  id: <collector_id>
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
  multiprocessing: false
  queue_max_size_in_mb: 1024
  queue_max_size_in_messages: 1000
  queue_max_elapsed_time_in_sec: 60
  queue_wrap_max_size_in_messages: 100
outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  sqs:
    id: 12345
    enabled: true
    credentials:
      aws_access_key_id: password
      aws_secret_access_key: secret-access-key
      aws_base_account_role: arn:aws:iam::837131528613:role/devo-xaccount-cs-role
      aws_cross_account_role: arn:aws:iam::{account-id}:role/{role-name}
      aws_external_id: extra_security_optional
    region: region
    base_url: https://sqs.{region}.amazonaws.com/{account-number}/{queue-name}
    sqs_visibility_timeout: 120
    sqs_wait_timeout: 20
    sqs_max_messages: 4
    ack_messages: false
    direct_mode: false
    do_not_send: false
    compressed_events: false
    services:
      custom_service:
        file_field_definitions: {}
        filename_filter_rules: []
        encoding: gzip
        ack_messages: false
        file_format:
          type: single_json_object_processor
          config:
            key: Records
        record_field_mapping: {}
        routing_template: my.app.source1.type1
        line_filter_rules: []
```
**Info**: All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the configuration.
Replace the placeholders with your required values following the description table below:
| Parameter | Data type | Type | Value range | Details |
|---|---|---|---|---|
| debug_status | bool | Mandatory | false / true | If the value is true, debug-level traces will be logged when running the collector. If the value is false, only the info, warning, and error levels will be printed. |
| collector_id | int | Mandatory | Minimum length: 1<br>Maximum length: 5 | Use this parameter to give a unique ID to this collector. |
| collector_name | str | Mandatory | Minimum length: 1<br>Maximum length: 10 | Use this parameter to give a valid name to this collector. |
| devo_address | str | Mandatory | collector-us.devo.io<br>collector-eu.devo.io | Use this parameter to identify the Devo Cloud where the events will be sent. |
| chain_filename | str | Mandatory | Minimum length: 4<br>Maximum length: 20 | Use this parameter to identify the chain.cert file downloaded from your Devo domain. Usually this file's name is chain.crt. |
| cert_filename | str | Mandatory | Minimum length: 4<br>Maximum length: 20 | Use this parameter to identify the file.cert downloaded from your Devo domain. |
| key_filename | str | Mandatory | Minimum length: 4<br>Maximum length: 20 | Use this parameter to identify the file.key downloaded from your Devo domain. |
| short_unique_id | int | Mandatory | Minimum length: 1<br>Maximum length: 5 | Use this parameter to give a unique ID to this input service. **Note**: This parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| input_status | bool | Mandatory | false / true | Use this parameter to enable or disable the given input logic when running the collector. If the value is true, the input will be run. If the value is false, it will be ignored. |
| base_url | str | Mandatory | | The URL of the SQS queue, in the format https://sqs.region.amazonaws.com/account-number/queue-name. |
| aws_access_key_id | str | Mandatory/Optional | Any | Only needed if not using a cross-account role. |
| aws_secret_access_key | str | Mandatory/Optional | Any | Only needed if not using a cross-account role. |
| aws_base_account_role | str | Mandatory/Optional | Any | Only needed if using a cross-account role. This is Devo's cross-account role. |
| aws_cross_account_role | str | Mandatory/Optional | Any | Only needed if using a cross-account role. This is your cross-account role. |
| aws_external_id | str | Optional | Any | An extra layer of security you can set up. |
| ack_messages | bool | Mandatory | false / true | Must be set to true to delete messages from the queue after processing. Leave it false until testing is complete. |
| direct_mode | bool | Optional | false / true | Set to false for almost all scenarios. This parameter should be removed if it is not used. |
| do_not_send | bool | Optional | false / true | Set to true to avoid sending the logs to Devo. This parameter should be removed if it is not used. |
| sqs_visibility_timeout | int | Mandatory | Min: 120<br>Max: 43200 (higher values have not been tested) | How long (in seconds) a message stays invisible to other consumers while the collector holds it. If the message is not processed and deleted within that time, it is put back in the queue and can be processed again. Set this high enough for the collector to download and process large files; otherwise it defaults to 120. For CrowdStrike FDR, some messages can take 10-15 minutes to process, so raise the timeout to help reduce duplicates. |
| sqs_wait_timeout | int | Mandatory | Min: 20<br>Max: 20 | Controls long polling: each poll waits this many seconds for a message. If no message is found, the collector logs "Long poll did not find any messages in queue. All data in the SQS queue has been successfully collected." |
| sqs_max_messages | int | Mandatory | Min: 1<br>Max: 6 | Must always be set to 1. |
| region | str | Mandatory | Example: us-east-1 | The AWS region that appears in the base URL. |
| compressed_events | bool | Mandatory | false / true | Only works with gzip compression; this should be false unless you see errors like "'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte", which may mean the events need to be decompressed. |
| encoding | str | Optional | gzip / none / parquet / latin-1 | How the log files are encoded inside the S3 bucket, listed from most to least used. **Note**: It can accept any other string, such as ascii or utf-16; the value simply tells the collector how to read the file. |
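Before launching, it is easy to miss a placeholder in config.yaml. A quick sanity check (a hypothetical helper, not part of the collector) that flags any remaining angle-bracket tokens:

```python
import re

# Sample config text; in practice, read it from config/config.yaml.
config_text = """\
globals:
  debug: <debug_status>
  id: 12345
  name: sqs-collector
"""

# Any leftover <placeholder> token means the file is not ready to use.
leftover = re.findall(r"<[a-z_]+>", config_text)
print("unreplaced placeholders:", leftover)
```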
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
| Collector Docker image | SHA-256 hash |
|---|---|
| collector-aws_sqs_if-docker-image-1.7.0 | 4b75fb4481203b5a416eb9523ef97b5fa09a939f530265b0158f530777398d28 |
Use the following command to add the Docker image to the system:
```shell
gunzip -c <image_file>-<version>.tgz | docker load
```
**Note**: Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace `<image_file>` and `<version>` with the appropriate values.
The Docker image can be deployed on the following services:
Docker
Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/
```shell
docker run \
  --name collector-<product_name> \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --env CONFIG_FILE=config.yaml \
  --rm \
  --interactive \
  --tty \
  <image_name>:<version>
```
**Note**: Replace `<product_name>`, `<image_name>`, and `<version>` with the appropriate values.
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/
directory.
```yaml
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}
```
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/
directory:
```shell
IMAGE_VERSION=<version> docker-compose up -d
```
**Note**: Replace `<version>` with the appropriate value.
Verify data collection
Once the collector has been launched, it is important to check that ingestion is working properly. To do so, go to the collector's logs console.
This service has the following components:
| Component | Description |
|---|---|
| Setup | The setup module is in charge of authenticating the service and managing token expiration when needed. |
| Puller | The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK. |
Setup output
A successful run has the following output messages for the setup module:
```text
2024-01-16T12:47:04.044 INFO OutputProcess::MainThread -> Process started
2024-01-16T12:47:04.044 INFO InputProcess::MainThread -> Process Started
2024-01-16T12:47:04.177 INFO InputProcess::MainThread -> InputThread(sqs_collector,12345) - Starting thread (execution_period=60s)
2024-01-16T12:47:04.177 INFO InputProcess::MainThread -> ServiceThread(sqs_collector,12345,aws_sqs_vpc,predefined) - Starting thread (execution_period=60s)
2024-01-16T12:47:04.177 INFO InputProcess::MainThread -> AWSsqsPullerSetup(unknown,sqs_collector#12345,aws_sqs_vpc#predefined) -> Starting thread
2024-01-16T12:47:04.177 INFO InputProcess::MainThread -> AWSsqsPuller(sqs_collector,12345,aws_sqs_vpc,predefined) - Starting thread
2024-01-16T12:47:04.178 WARNING InputProcess::AWSsqsPuller(sqs_collector,12345,aws_sqs_vpc,predefined) -> Waiting until setup will be executed
2024-01-16T12:47:04.191 INFO OutputProcess::MainThread -> ConsoleSender(standard_senders,console_sender_0) -> Starting thread
2024-01-16T12:47:04.191 INFO OutputProcess::MainThread -> ConsoleSenderManagerMonitor(standard_senders,console_1) -> Starting thread (every 300 seconds)
2024-01-16T12:47:04.191 INFO OutputProcess::MainThread -> ConsoleSenderManager(standard_senders,manager,console_1) -> Starting thread
2024-01-16T12:47:04.192 INFO OutputProcess::MainThread -> ConsoleSender(lookup_senders,console_sender_0) -> Starting thread
2024-01-16T12:47:04.192 INFO OutputProcess::ConsoleSenderManager(standard_senders,manager,console_1) -> [EMERGENCY PERSISTENCE SYSTEM] ConsoleSenderManager(standard_senders,manager,console_1) -> Nothing retrieved from the persistence.
2024-01-16T12:47:04.192 INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> [EMERGENCY PERSISTENCE SYSTEM] OutputStandardConsumer(standard_senders_consumer_0) -> Nothing retrieved from the persistence.
2024-01-16T12:47:04.192 INFO OutputProcess::MainThread -> ConsoleSenderManagerMonitor(lookup_senders,console_1) -> Starting thread (every 300 seconds)
2024-01-16T12:47:04.192 INFO OutputProcess::MainThread -> ConsoleSenderManager(lookup_senders,manager,console_1) -> Starting thread
2024-01-16T12:47:04.193 INFO OutputProcess::MainThread -> ConsoleSender(internal_senders,console_sender_0) -> Starting thread
2024-01-16T12:47:04.193 INFO OutputProcess::ConsoleSenderManager(lookup_senders,manager,console_1) -> [EMERGENCY PERSISTENCE SYSTEM] ConsoleSenderManager(lookup_senders,manager,console_1) -> Nothing retrieved from the persistence.
2024-01-16T12:47:04.193 INFO OutputProcess::MainThread -> ConsoleSenderManagerMonitor(internal_senders,console_1) -> Starting thread (every 300 seconds)
2024-01-16T12:47:04.193 INFO OutputProcess::MainThread -> ConsoleSenderManager(internal_senders,manager,console_1) -> Starting thread
2024-01-16T12:47:04.193 INFO OutputProcess::OutputLookupConsumer(lookup_senders_consumer_0) -> [EMERGENCY PERSISTENCE SYSTEM] OutputLookupConsumer(lookup_senders_consumer_0) -> Nothing retrieved from the persistence.
2024-01-16T12:47:05.795 INFO InputProcess::AWSsqsPuller(sqs_collector,12345,aws_sqs_vpc,predefined) -> Starting data collection every 5 seconds
```
Puller output
A successful initial run has the following output messages for the puller module:
Note that the PrePull action is executed only once, before the first run of the Pull action.
```text
2024-01-16T17:02:56.221036303Z 2024-01-16T17:02:56.220 INFO InputProcess::AWSsqsPuller(sqs_collector,12345,aws_sqs_cloudwatch_vpc,predefined) -> Acked message receiptHandle: /+qA+ymL2Vs8yb//++7YM2Ef8BCetrJ+/+////F1uwLOVfONfagI99vA=
2024-01-16T17:02:56.221386926Z 2024-01-16T17:02:56.221 INFO InputProcess::AWSsqsPuller(sqs_collector,12345,aws_sqs_cloudwatch_vpc,predefined) -> Data collection completed. Elapsed time: 2.413 seconds. Waiting for 2.587 second(s) until the next one
```
Restart the persistence
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
Delete the collector and recreate it with a new ID number.
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
**Note**: This action clears the persistence and cannot be recovered in any way. Resetting persistence could result in duplicate or lost events.
Collector operations
This section is intended to explain how to proceed with specific operations of this collector.
Verify collector operations
The initialization module is in charge of setup and running the input (pulling logic) and output (delivering logic) services and validating the given configuration.
Events delivery and Devo ingestion
The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.
A successful run has the following output messages for the initializer module:
```text
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Sender: SyslogSender(standard_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Standard - Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 44 (elapsed 0.007 seconds)
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Sender: SyslogSender(internal_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Internal - Total number of messages sent: 1, messages sent since "2022-06-28 10:39:22.516313+00:00": 1 (elapsed 0.019 seconds)
```
Sender services
The Integrations Factory Collector SDK has three different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:
| Sender service | Description |
|---|---|
| internal_senders | In charge of delivering internal metrics to Devo, such as logging traces or metrics. |
| standard_senders | In charge of delivering pulled events to Devo. |
Sender statistics
Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
| Logging trace | Description |
|---|---|
| Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service. |
| sender manager internal queue size: 0 | Displays the number of items waiting in the internal sender queue. This value helps detect bottlenecks; if it grows, increase the number of concurrent senders to improve delivery performance to Devo. |
| Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds) | Displays the event counts since the last trace. Following the given example: 44 events were sent to Devo since the collector started; the last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00; 21 events were sent to Devo between the last UTC checkpoint and now; and those 21 events required 0.007 seconds to be delivered. |
By default these traces will be shown every 10 minutes.
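When monitoring these traces programmatically, the counters can be pulled out with a regular expression. A sketch (the pattern is an assumption derived from the trace format shown above, not an official parsing API):

```python
import re

trace = ('Standard - Total number of messages sent: 44, messages sent since '
         '"2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)')

# Capture: total since start, checkpoint timestamp, recent count, elapsed seconds.
pattern = (r'Total number of messages sent: (\d+), messages sent since '
           r'"([^"]+)": (\d+) \(elapsed ([\d.]+) seconds\)')
match = re.search(pattern, trace)
total, checkpoint, recent, elapsed = (int(match.group(1)), match.group(2),
                                      int(match.group(3)), float(match.group(4)))
print(total, checkpoint, recent, elapsed)
```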
Check memory usage
To check the memory usage of this collector, look for the following log records, which are displayed every 5 minutes by default, always after the memory-freeing process runs.
The used memory is displayed per running process, and the sum of both values gives the total memory used by the collector.
The global pressure on the available memory is displayed in the global value.
All metrics (global, RSS, VMS) show the value before and after freeing memory: previous -> after freeing memory.
```text
INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB -> 410.02MiB)
INFO OutputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(28.41MiB -> 28.41MiB), VMS(705.28MiB -> 705.28MiB)
```
Differences between RSS and VMS memory usage:
- RSS is the Resident Set Size: the actual physical memory the process is using.
- VMS is the Virtual Memory Size: the virtual memory the process is using.
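For reference, on Unix-like systems a process can inspect its own peak RSS with Python's standard library (a rough illustration of the metric itself, not the collector's reporting mechanism):

```python
import resource
import sys

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_maxrss is reported in kilobytes on Linux and in bytes on macOS.
peak_rss_kib = usage.ru_maxrss // 1024 if sys.platform == "darwin" else usage.ru_maxrss
print(f"peak RSS: {peak_rss_kib} KiB")
```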
Enable/disable the logging debug mode
Sometimes it is necessary to activate the debug mode of the collector's logging. This debug mode increases the verbosity of the log and allows you to print execution traces that are very helpful in resolving incidents or detecting bottlenecks in heavy download processes.
To enable this option, edit the configuration file, change the debug_status parameter from false to true, and restart the collector. To disable it, change the parameter back from true to false and restart the collector.
For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.
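Since the change is a single flag, it can even be scripted. A hypothetical helper that flips debug_status in the config text (plain text replacement, to avoid reordering YAML keys):

```python
def toggle_debug(config_text: str) -> str:
    """Flip 'debug: false' <-> 'debug: true' in a config.yaml body."""
    if "debug: false" in config_text:
        return config_text.replace("debug: false", "debug: true", 1)
    return config_text.replace("debug: true", "debug: false", 1)

original = "globals:\n  debug: false\n  id: 12345\n"
enabled = toggle_debug(original)
print(enabled)
```

Remember that the collector must still be restarted for the change to take effect.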
Change log
v1.7.0 (Recommended version)

Bug fixes:
- Fixed control tower issue
- Fixed a bug in the Falcon Data Replicator Large service where logs took over an hour to finish

Features:
- Created custom tagging based on record field mapping
- Created NLB logging service
- Added INFO/DEBUG logging around each method so users can see size and timing
v1.6.4 (Upgrade)

Features:
- Created custom tagging based on record field mapping
- Added INFO/DEBUG logging around most methods so users can see size and timing

Bug fixes:
- Fixed a dependency issue
- Fixed control tower issue
- Fixed a bug in the Falcon Data Replicator Large service where logs took over an hour to finish
v1.6.3 (Upgrade)

Bug fixes:
- Fixed log operations bug
- Added backwards compatibility to control tower
- Fixed the Palo Alto service for snappy decompression
v1.6.2 (Upgrade)

Bug fixes:
- Fixed a None type causing message processing to fail in fdr_large
- Added a default region to the initialization of the STS client to prevent needing environment variables in the green cluster
- Fixed a bug in the control tower processor
v1.6.1 (Upgrade)

Improvements:
- Created a new processor for extracting a message from a singular log
v1.6.0 (Upgrade)

Improvements:
- Upgraded DCSDK from 1.12.2 to 1.12.4
- Removed multithreading
- Added a setup method
- Removed deduplication
- Added debug logging around the use of dynamic filenames to help with creating dynamic tags

Bug fixes:
- Fixed a bug where the message body was a string and caused a type error
- Fixed a bug where the client was not refreshed in time before acknowledging a message
v1.5.1 (Upgrade)

Bug fixes:
- Fixed dependency issue
v1.5.0 (Upgrade)

Features:
- Removed debug_md5 and made it default for all dictionary logs
- Created a new VPC flow processor
- Added a new sender for in-house relay + TLS
- Added persistence functionality for the gzip sending buffer
- Added automatic activation of gzip sending

Improvements:
- Updated Docker image to 1.3.0
- Updated DCSDK from 1.11.1 to 1.12.2
- Fixed a high vulnerability in the Docker image
- Upgraded DevoSDK dependency to version v5.4.0
- Fixed an error in the persistence system
- Applied changes to make DCSDK compatible with macOS
- Improved behaviour when persistence fails
- Fixed console log encoding
- Restructured Python classes
- Improved behaviour with non-UTF-8 characters
- Decreased the default size value for internal queues (Redis limitation, from 1 GiB to 256 MiB)
- New persistence format/structure (compression in some cases)
- Removed dmesg execution (it was invalid for Docker execution)
v1.4.0 (Upgrade)

Features:
- Implemented pulling of events sent by EventBridge
- Added more debugging information to events, such as the time the message was sent to the queue, the number of times it has been sent to the queue, the bucket, and the file name

Bug fixes:
- Fixed an import dependency error

Improvements:
- Raised the default visibility timeout to 1 hour
v1.3.2 (Upgrade)

Bug fixes:
- Fixed the initialization of the client credentials that was missing the token
v1.3.1 (Upgrade)

Bug fixes:
- Fixed index out of range error in the aws_sqs_fdr_large service
v1.3.0 (Upgrade)

Features:
- Fixed a logging message saying the message was not acked even though it was
- Added back the use of 1-6 messages in the configuration
- Added multithreading for downloading messages in parallel
- Updated the aws_sqs_fdr_large service with a faster downloading method using ijson
v1.2.3 (Upgrade)

Features:
- Updated to orjson for its performance qualities
v1.2.2 (Upgrade)

Features:
- Changed the processors' handling of the log from str to JSON dumps
v1.2.1 (Upgrade)

Features:
- Added file filtering to the Incapsula service
v1.2.0 (Upgrade)

- Updated to DCSDK 1.11.1:
  - Added an extra check for invalid message timestamps
  - Added an extra check to improve the controlled stop
  - Changed the default number of connection retries (now 7)
  - Fixed Devo connection retries
v1.1.3 (Upgrade)

Bug fixes:
- Fixed a bug in parquet log processing
- Fixed the max number of messages and updated the in-flight message timeout
- Fixed the way the access key and secret are used

Improvements:
- Updated to DCSDK 1.11.0

Features:
- Added a feature to send an md5 message to the my.app table
- Added the RDS service to the collector definitions
v1.0.1 (Upgrade)

Bug fixes:
- Fixed the state file

Improvements:
- Used the run method instead of pull to enable long polling
- Added different types of encoding (latin-1)
- Updated collector definitions to be objects instead of arrays, which was throwing off tagging and record field mapping
v1.0.0 (Initial version)

- Released with DCSDK 1.10.2