Devo recommends using the AWS SQS collector instead of the AWS collector. SQS improves ease of use, reliability, and performance.
Overview
Amazon Web Services (AWS) provides on-demand cloud computing platforms and APIs to individual companies. Each available AWS service generates information related to different aspects of its functionality. The available data types include service events, audit events, metrics, and logs.
You can use the AWS collector to retrieve data from the AWS APIs and send it to your Devo domain. Once the gathered information arrives at Devo, it will be processed and included in different tables in the associated Devo domain so users can analyze it.
To run this collector, there are some configurations detailed below that you need to consider:
...
Info
More information
Refer to the Vendor setup section to know more about these configurations.
Devo collector features
Feature | Details
Allow parallel downloading (multipod) | Not allowed
Running environments | Collector server, On-premise
Populated Devo events | Table
Flattening preprocessing | No
Data sources
Data source | Description | API endpoint | Collector service name | Devo table | Available from release
Service events
The different available services in AWS usually generate information related to their internal behaviors, such as "a virtual machine has been started", "a new file has been created in an S3 bucket" or "an AWS Lambda function has been invoked", and these events can be triggered without any human interaction.
The service events are managed by the CloudWatch Events service (CWE). AWS has recently created a new service called Amazon EventBridge that is intended to replace the CWE service.
The findings detected by AWS Security Hub are also managed by CloudWatch Events (CWE).
If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudwatch.events
If auto_event_type parameter in config file is set to true: cloud.aws.cloudwatch.{event_type}
Security Hub events:
cloud.aws.securityhub.findings
-
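The destination table therefore depends only on the auto_event_type parameter. As an illustrative sketch (the auto_event_type parameter and the service-events-all service name appear on this page, but the surrounding configuration structure is an assumption and may differ in your collector version):

```yaml
# Sketch of a service-events configuration fragment. Only the
# "auto_event_type" parameter and the "service-events-all" service name
# are taken from this page; the surrounding nesting is illustrative.
services:
  service-events-all:
    # unset or false -> events land in cloud.aws.cloudwatch.events
    # true           -> events land in cloud.aws.cloudwatch.{event_type}
    auto_event_type: true
```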
Audit events
These events are more specific because they are triggered by human interaction, no matter the channel used: API, web interaction, or even the CLI console.
The audit events are managed by the CloudTrail service.
There are two ways to read Audit events:
API: using the CloudTrail API. This way is slower, but it can retrieve data back in time.
S3+SQS: forwarding CloudTrail data to an S3 bucket and reading it from there through an SQS queue. This way is much faster, but it can only retrieve elements generated after the S3+SQS pipeline was created.
If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudtrail.events
If auto_event_type parameter in config file is set to true: cloud.aws.cloudtrail.{event_type}
-
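The two collection modes above are selected through the types parameter. A hedged sketch (the audit_apis and audits_s3 values come from this page; the service name and nesting are illustrative assumptions):

```yaml
# Sketch of an audit-events configuration fragment. The "types" values
# are documented on this page; the service name and nesting are illustrative.
services:
  audit-events:
    types:
      - audit_apis   # CloudTrail API: slower, but can retrieve data back in time
      # - audits_s3  # S3+SQS: faster, but only events created after the pipeline
    auto_event_type: false  # unset/false -> cloud.aws.cloudtrail.events
```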
Metrics
According to the standard definition, this kind of information is usually generated at the moment it is requested, because it is usually a query about the status of a service (all things inside AWS are considered services).
AWS does something slightly different: it generates metrics information every N-sized time slot, such as 1 min, 5 min, 30 min, 1 h, etc., even if no one makes a request (it is also possible to have information every X seconds, but this requires extra costs).
The metrics are managed by the CloudWatch Metrics service (CWM).
Logs could be defined as information with a non-fixed structure that is sent to one of the available "logging" services; these services are CloudWatch Logs and S3.
Some highly customizable services, such as AWS Lambda, or even any application deployed inside an AWS virtual machine (EC2), can generate custom log information. This kind of information is managed by the CloudWatch Logs service (CWL) and also by the S3 service.
There are also some other services that can generate logs with a fixed structure, such as VPC Flow Logs or CloudFront Logs. These services require a special way of collecting their data.
Managed by CloudWatch: this is a custom service that is activated by using the custom_service service and including the logs type in the types parameter in the config file.
Not managed by CloudWatch: use the non-cloudwatch-logs service and include the required type (flowlogs for VPC Flow Logs and/or cloudfrontlogs for CloudFront Logs) in the types parameter in the config file.
Managed by Cloudwatch: cloud.aws.cloudwatch.logs
Not managed by Cloudwatch:
VPC Flow Logs:
If auto_event_type parameter in config file is not set or set to false: cloud.aws.vpc.unknown
If auto_event_type parameter in config file is set to true: cloud.aws.vpc.{event_type}
CloudFront Logs:
If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudfront.unknown
If auto_event_type parameter in config file is set to true: cloud.aws.cloudfront.{event_type}
-
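Putting the two log collection modes together, a configuration sketch could look as follows (the service names custom_service and non-cloudwatch-logs and the types values come from this page; the surrounding structure is an assumption):

```yaml
# Sketch: log collection services. Service names and "types" values are
# documented above; the surrounding nesting is illustrative.
services:
  custom_service:
    types:
      - logs             # managed by CloudWatch -> cloud.aws.cloudwatch.logs
  non-cloudwatch-logs:
    types:
      - flowlogs         # VPC Flow Logs   -> cloud.aws.vpc.*
      - cloudfrontlogs   # CloudFront Logs -> cloud.aws.cloudfront.*
```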
AWS GuardDuty
AWS GuardDuty is a managed threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3.
Data Sources: GuardDuty ingests and processes data from AWS CloudTrail logs, VPC Flow Logs, and DNS logs
Findings: When a potential threat is detected, GuardDuty generates a finding. These findings provide details about the activity, including the affected resources, type of threat, and suggested remediation actions.
We use the API to get findings from the GuardDuty service.
Cisco Umbrella [Non-AWS service]
Cisco Umbrella is a cloud-driven Secure Internet Gateway (SIG) that leverages insights gained through the analysis of various logs, including DNS logs, IP logs, and Proxy logs, to provide a first line of defense.
DNS logs record all DNS queries that are made through the Cisco Umbrella DNS resolvers. These logs contain data about the DNS queries originating from your network, requested domain names and the IP address of the requester.
IP logs capture all IP-based communications that occur through the network. These logs store details such as the source and destination IP addresses, ports and protocols used.
Proxy logs are generated when users access web resources through the Cisco Umbrella intelligent proxy. They contain detailed information on the web traffic, including the URL accessed, the method of access (GET, POST, etc.), the response status, etc.
There are some minimal requirements to set up this collector:
AWS console access: Credentials are required to access the AWS console.
Owner or Administrator permissions within the AWS console, or full access to configure AWS services.
Some manual actions are necessary in order to get all the required information or services and allow Devo to gather information from AWS. The following sections describe how to get the required AWS credentials and how to proceed with the different required setups depending on the gathered information type.
Policy details
Some collector services require the creation of some IAM policies before creating the IAM user that will be used for the AWS collector. The following table contains the details about the policies that could be used by the AWS collector:
Source type | AWS Data Bus | Recommended policy name | Variant | Additional info
Service events | CloudWatch Events | devo-cloudwatch-events | All resources | Tip: It is not required to create any new policy because no permissions are needed.
All the previous policies are defined to be AWS region agnostic, which means, that they will be valid for any AWS region.
Using a user account and local policies
Depending on which source types are collected, one or more of the policies described above will be used. Once the required policies are created, each one must be associated with an IAM user. To create it, visit the AWS Console and log in with a user account with enough permissions to create and access AWS structures:
Go to IAM → Users
Click Add users button.
Enter the required value in the field User name.
Enable the checkbox Access key - Programmatic access.
Click on Next: Tags button.
Click on Next: Review button.
Click on Create user button.
The Access Key ID and Secret Key will show. Click Download.csv button and save it.
Assuming a role (self-account)
It is a best practice to assume roles that are granted just the required privileges to perform an action. If the customer does not want to use their own AWS user to perform the actions required by the collector - because it has far more privileges than required - they can use this option. Note that this option requires the use of AWS account credentials. To avoid sharing those credentials, check the Cross Account section below.
Then the customer must attach the required AWS policies to the role that is going to be assumed.
Go to IAM → Roles.
Click on Create role button.
In the Trusted entity type, select AWS account and then select This account (123456789012).
Add the required policies.
Give a name to the role.
Click on Create role.
You should also add authentication credentials to the configuration. Add the next fields into the configuration:
access_key: This is the Access Key ID provided by AWS during the user creation process.
access_secret: This is the Secret Access Key provided by AWS during the user creation process.
base_assume_role: This is the ARN of the role that is going to be assumed by the user authenticated with the parameters above, access_key and access_secret. This role has to be properly granted to allow the actions that the collector is going to perform.
These fields need to be in the credentials and are required to use this authentication method:
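A minimal credentials sketch for this method might look as follows (the parameter names come from this page; the values are placeholders you must replace, and the exact section layout may differ in your collector version):

```yaml
# Sketch of the credentials section for the self-account assume-role method.
# Values in angle brackets are placeholders.
credentials:
  access_key: <access-key-id-from-aws>
  access_secret: <secret-access-key-from-aws>
  base_assume_role: arn:aws:iam::<account_id>:role/<role-to-assume>
```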
Assuming a role (cross-account)
In case you don't want to share your credentials with Devo, you should add some parameters to the configuration file in the credentials section, instead of sharing access_key and access_secret. Follow these steps to allow this authentication:
Prepare the environment to allow Devo’s cloud collector server to assume roles cross-account.
Add ARNs for each role into the configuration:
base_assume_role: This is the ARN of the role that is going to be assumed by the profile bound to the machine/instance where the collector is running. This role already exists in Devo's AWS account and its value must be: arn:aws:iam::837131528613:role/devo-xaccount-cs-role *
target_assume_role: This is the ARN of the role in the AWS account. This role allows the collector to have access to the resources specified in this role. To keep your data secure, please, use policies that grant just the necessary permissions.
assume_role_external_id : This is an optional parameter to add more security to this Cross Account operation. This value should be a string added to the request to assume the customer’s role.
Note
*New role
If you’re deploying your collector using the Cloud collector app, you should use the following role instead of the one above:
This authentication method does not share credentials. These fields need to be in the credentials section and are all required, except assume_role_external_id, which is optional:
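A credentials sketch for the cross-account method could look like this (the base_assume_role ARN is the one given above; the other values are placeholders, and the exact section layout may differ in your collector version):

```yaml
# Sketch of the credentials section for the cross-account method.
# No access_key/access_secret is shared with Devo.
credentials:
  base_assume_role: arn:aws:iam::837131528613:role/devo-xaccount-cs-role
  target_assume_role: arn:aws:iam::<customer_account_id>:role/<customer-role>
  assume_role_external_id: <optional-external-id>  # optional extra security
```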
CloudWatch manages all the service events that are generated on AWS. However, Devo's AWS collector offers two different services that collect CloudWatch Events:
sqs-cloudwatch-consumer: This service is used to collect Security Hub events.
service-events-all: This service is used to collect events from the rest of the services on AWS.
Info
Service events
Some previous configurations are required if you want to use any of these services. The AWS services generate service events per region, so the following instructions should be applied in each region where collecting information is required. You need to create some structures to collect these service events: a FIFO queue in the SQS service and a Rule+Target in the CloudWatch service.
If you want to create them manually, click on each one to follow the steps.
SQS FIFO queue creation
Go to Simple Queue Service and click on Create queue.
In the Details section, choose the FIFO queue type and set the name field value you prefer. It must end with the .fifo suffix.
In the Configuration section, set the Message retention period field value to 5 days. Be sure that the Content-based deduplication checkbox is marked.
In the Access policy section, choose the method Basic and choose Only the queue owner for receiving and sending permissions.
Optional step. Create one tag with key "usedBy" and value "devo-collector".
Click on Create queue.
EventBridge Rule + Target creation
Go to EventBridge, expand Events in the left-side menu and click on Rules.
In the Defined rule detail section, fill the required data and select the Rule type called Rule with an event pattern.
In the Build event pattern section, select All events.*
In the Select Target section, select AWS target as a target type and fill the SQS queue information. In the Message group ID write devo-collector.
Optional step. Configure tags section.
In the Review and create section, just check the different sections and once everything is correct, click on Create rule.
Info
(*) Note for Security Hub
To retrieve Security Hub findings, in the Build event pattern section, select AWS events or EventBridge partner events in Event source. Then, go to the Sample events - optional part and select AWS events in Sample event type. In Sample events, select Security Hub Findings - Custom Action.
Steps to enable Audit Events
No actions are required in the CloudTrail service for retrieving this kind of information when the API approach is used (setting types as audit_apis).
For the S3+SQS approach (setting types as audits_s3), some previous configuration is required. Find a complete description of how to create an S3+SQS pipeline here.
Steps to enable Metrics
No actions are required in CloudWatch Metrics service for retrieving this kind of information.
Steps to enable Logs
Logs can be collected from different services. Depending on the type, some previous setups must be applied on AWS:
CloudWatch Logs
No actions are required in this service for retrieving this kind of information.
VPC Flow Logs
Before enabling the generation of these logs, some structures must be created: one bucket in the S3 service and one standard queue in the SQS service.
Follow the steps to create those structures manually:
Create SQS Standard queue
Go to Simple Queue Service and click on Create queue.
In the Details section, choose the Standard queue type and set the Name field value you prefer.
In the Configuration section, set the Message retention period field value to 5 days and leave the rest of the values with the default ones.
In the Access policy section, choose the method Advanced and adapt this value: "Principal": {"AWS": "<account_id>"}.
Optional. In the Tags section, create one tag with key "usedBy" and value "devo-collector".
Click on Create queue button.
Create or configure S3 bucket
Go to S3 and click on Create bucket button.
Set the preferred value in the Bucket name field.
Choose any Region value.
Click on the Next button.
Optional. Create one tag with key "usedBy" and value "devo-collector".
Leave all values with the default ones and click on the Next button.
Click on Create bucket button.
Mark the checkbox next to the previously created S3 bucket.
In the popup box, click on the Copy Bucket ARN button and save the content to be used in the next steps.
In the S3 bucket list, click on the previously created bucket name link.
Click on the Properties tab.
Click on the Events box.
Click on the Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select SQS Queue as the value.
Select the previously created SQS queue in the SQS field.
Create Flow Log
Go to VPC service.
Select any available VPC (or create a new one).
Choose Flow Logs tab.
Click on Create flow log button.
Choose the preferred Filter value.
Choose the preferred Maximum aggregation interval value.
Select Send to an S3 bucket as the Destination field value.
In the S3 bucket ARN field, set the ARN of the previously created S3 bucket (saved in a previous step).
Be sure that the Format field is set to the value AWS default format.
Optional. Create one tag with key "usedBy" and value "devo-collector".
Click on Create button.
CloudFront Logs
Before enabling the generation of these logs, some structures must be created: one bucket in the S3 service and one standard queue in the SQS service.
For the manual creation of these required structures, please follow the next steps:
Create SQS Standard queue
Go to Simple Queue Service and click on Create queue button.
In the Details section, choose the Standard queue type and set the Name field value you prefer.
In the Configuration section, set the Message retention period field value to 5 days and leave the rest of the values with the default ones.
In the Access policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*" (leave the rest of the JSON as it comes).
Optional. In the Tags section, create one tag with key "usedBy" and value "devo-collector".
Click on Create queue button.
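After the access policy step above, the queue policy contains a wildcard principal. As a rough sketch of the result (JSON is valid YAML; only the "Principal": "*" change is documented on this page, while the remaining statement fields are illustrative of a typical S3-to-SQS notification policy):

```yaml
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:<region>:<account_id>:<queue_name>"
    }
  ]
}
```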
Create or configure S3 bucket
Go to S3 and click on Create bucket button.
Set the preferred value in the Bucket name field.
Choose any Region value.
Click on the Next button.
Optional. Create one tag with key "usedBy" and value "devo-collector".
Leave all values with the default ones and click on the Next button.
Click on Create bucket button.
Mark the checkbox next to the previously created S3 bucket.
In the popup box, click on the Copy Bucket ARN button and save the content to be used in the next steps.
In the S3 bucket list, click on the previously created bucket name link.
Click on the Properties tab.
Click on the Events box.
Click on the Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select SQS Queue as the value.
Select the previously created SQS queue in the SQS field.
Enable logging in CloudFront
Go to the CloudFront service.
Click on the ID field link of the target Distribution item (to access the Distribution Settings options).
Click on the Edit button.
In Logging, choose the value On.
In the Bucket for Logs field, set the ARN of the previously created S3 bucket (saved in a previous step).
Click on the Yes, Edit button.
Steps to enable Cisco Umbrella logs
Action | Steps
SQS Standard queue creation
Go to Simple Queue Service and click Create queue.
In the Details section:
Choose Standard queue type.
Set the Name field value you prefer.
In the Configuration section:
Set the Message retention period field value to 5 Days.
Leave the rest values from Configuration section with the default ones.
In the Access policy section:
Choose method Advanced.
Replace "Principal": {"AWS":"<account_id>"} with "Principal": "*" (leave the rest of the JSON as it comes).
(Not mandatory) Tags section:
Create one tag with key "usedBy" and value "devo-collector".
Click on Create queue button.
S3 bucket creation/configuration
Go to S3 and click on Create bucket button.
Set the preferred value in Bucket name field.
Choose any Region value.
Click the Next button.
(Not mandatory) Create one tag with Key usedBy and Value devo-collector.
Leave rest of fields with default values, click the Next button.
Leave all values with default ones, click the Next button.
Click the Create bucket button.
Mark the checkbox next to the previously created S3 bucket.
In the popup box, click the Copy Bucket ARN button and save the content for being used in the next steps.
In S3 bucket list, click the previously created bucket name link.
Click the Properties tab.
Click the Events box.
Click the + Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select the SQS Queue as value.
Select the previously created SQS queue in the SQS field.
Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.
Info
This minimum configuration refers exclusively to those specific parameters of this integration. There are more required parameters related to the generic behavior of the collector. Check setting sections for details.
Setting | Details
access_key | This is the account identifier for AWS. More info can be found in the section Using a user account and local policies.
access_secret | This is the secret (kind of a password) for AWS. More info can be found in the section Using a user account and local policies.
base_assume_role | This allows assuming a role with limited privileges to access AWS services. More info can be found in the sections Assuming a role (self-account) and Assuming a role (cross-account).
target_assume_role | This allows assuming a role on another AWS account with limited privileges to access AWS services. More info can be found in the section Assuming a role (cross-account).
assume_role_external_id | This is an optional field that provides additional security to the assume role operation on cross-accounts. More info can be found in the section Assuming a role (cross-account).
Info
See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.
Accepted authentication methods
Depending on how you obtained your credentials, you will have to either fill in or delete the following properties in the JSON credentials configuration block.
Authentication method | access_key | access_secret | base_assume_role | target_assume_role | assume_role_external_id
Access Key / Access Secret | REQUIRED | REQUIRED | - | - | -
Assume role (self-account) | REQUIRED | REQUIRED | REQUIRED | - | -
Assume role (cross-account) | - | - | REQUIRED | REQUIRED | OPTIONAL
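For the Access Key / Access Secret method, only the key pair is filled in. A hedged sketch of that credentials block (the parameter names come from this page; the values are placeholders, and the exact section layout may differ in your collector version):

```yaml
# Sketch: Access Key / Access Secret authentication. Only these two
# parameters are required for this method; assume-role fields are omitted.
credentials:
  access_key: <access-key-id-from-aws>
  access_secret: <secret-access-key-from-aws>
```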
Run the collector
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).
On-premise collector
This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare all the required setup to get the data collector running.
Structure
The following directory structure should be created to be used when running the collector:
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.
In the Defined rule detail section, fill the required data and select the Rule type called Rule with an event pattern.
In the Build event pattern section, select All events.*
In the Select Target section, select AWS target as a target type and fill the SQS queue information. In the Message group ID write devo-collector.
Optional step. Configure tags section.
In the Review and create section, just check the different sections and once everything is correct, click on Create rule.
Info
( * )Note for Security Hub
To retrieve Security Hub Findings, in Build event pattern section, select AWS events or EventBridge partner events in Event source. Then, go to Sample events - optional part and select AWS events in Sample event type. In Sample events select Security Hub Findings - Custom Action
Steps to enable Audit Events
No actions are required in Cloudtrail Service for retrieving this kind of information when the API approach is used (setting types as audit_apis).
For the S3+SQS approach (setting types as audits_s3) some previous configuration is required. Find a complete description of how to create an S2 +SQS pipeline here.
Steps to enable Metrics
No actions are required in CloudWatch Metrics service for retrieving this kind of information.
Steps to enable Logs
Logs can be collected from different services. Depending on the type, some previous setups must be applied on AWS:
Expand
title
CloudWatch Logs
No actions are required in this service for retrieving this kind of information.
Expand
title
VPC Flow Logs
Before enabling the generation of these logs some structures must be created: one bucket in the S3 service and one FIFO queue in the SQS service.
Follow the steps to create those structures manually:
Create SQS Stadard queue
Go to Simple Queue Service and click on Create queue.
In the Details section. Choose the Standard queue type.
In the Configuration section set the Message retention period field value to 5 days and leave the rest of the values from the Configuration section with the default ones.
In the Access Policy section choose method Advanced and adapt this value Principal: {"AWS·:·<account_id>"}.
Optional. In the Tag section create one tag with key “usedBy” and value “devo-collector”.
Click on Create queue button.
Create or configure S3 bucket
Go to S3 and click on Create bucket button.
Set the preferred value in the Bucket name field.
Choose any Region value.
Click on the next button.
Optional. Create one tag with Key. “usedBy” and value “devo-collector”.
Leave all values with the default ones and click on the next button.
Click on Create bucket button.
Mark the checkbox next to the previously created S3 Bucket.
Mark the checkbox next to the previously created S3 bucket.
n the popup box click on Copy Bucket ARN button and save the content for being used in the next steps.
In S3 bucket list click on the previously created bucket name link.
Click on the Properties tab.
Click on the Events box.
Click on the Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select SQS Queue as the value.
Select the previously created SQS queue in the SQS field.
Create Flow Log
Go to VPC service.
Select any available VPC (or create a new one).
Choose Flow Logs tab.
Click on Create flow log button.
Choose the preferred Filter value.
Choose the preferred Maximum aggregation interval value.
Select as Destination field value Send to an S3 bucket.
In S3 bucket ARN field value set the ARN of the previously created S3 bucket (Saved in a previous step).
Make sure that the Format field is set to the AWS default format value.
Optional. Create one tag with Key "usedBy" and Value "devo-collector"
Click on Create button.
Expand
title
CloudFront Logs
Before enabling the generation of these logs, some structures must be created: one bucket in the S3 service and one Standard queue in the SQS service.
To create these required structures manually, follow the next steps:
Create SQS Standard queue
Go to Simple Queue Service and click on the Create queue button.
In the Details section, choose the Standard queue type and set the Name field value you prefer.
In the Configuration section, set the Message retention period field value to 5 days and leave the rest of the values in the Configuration section with the default ones.
In the Access policy section, choose the Advanced method and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*" (leave the rest of the JSON as is).
Optional. In the Tags section, create one tag with Key "usedBy" and Value "devo-collector".
Click on the Create queue button.
Create or configure S3 bucket
Go to S3 and click on Create bucket button.
Set the preferred value in the Bucket name field.
Choose any Region value.
Click on the Next button.
Optional. Create one tag with Key "usedBy" and Value "devo-collector".
Leave all values with the default ones and click on the Next button.
Click on the Create bucket button.
Mark the checkbox next to the previously created S3 bucket.
In the popup box, click on the Copy Bucket ARN button and save the content to be used in the next steps.
In S3 bucket list click on the previously created bucket name link.
Click on the Properties tab.
Click on the Events box.
Click on the Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select SQS Queue as the value.
Select the previously created SQS queue in the SQS field.
Allow logging in CloudFront
Go to the CloudFront service.
Click on the ID field link of the target Distribution item (to access the Distribution Settings options).
Click on the Edit button.
In the Logging field, choose the value On.
In the Bucket for Logs field, set the ARN of the previously created S3 bucket (saved in a previous step).
Click on the Yes, Edit button.
Steps to enable Cisco Umbrella logs
Action
Steps
SQS Standard queue creation
Go to Simple Queue Service and click Create queue.
In the Details section:
Choose Standard queue type.
Set the Name field value you prefer.
In the Configuration section:
Set the Message retention period field value to 5 Days.
Leave the rest values from Configuration section with the default ones.
In the Access policy section:
Choose method Advanced.
Replace "Principal": {"AWS":"<account_id>"} with "Principal": "*" (leave the rest of the JSON as is).
(Optional) In the Tags section:
Create one tag with Key "usedBy" and Value "devo-collector".
Click on Create queue button.
S3 bucket creation/configuration
Go to S3 and click on Create bucket button.
Set the preferred value in Bucket name field.
Choose any Region value.
Click the Next button.
(Optional) Create one tag with Key "usedBy" and Value "devo-collector".
Leave the rest of the fields with the default values and click the Next button.
Leave all values with default ones, click the Next button.
Click the Create bucket button.
Mark the checkbox next to the previously created S3 bucket.
In the popup box, click the Copy Bucket ARN button and save the content for being used in the next steps.
In S3 bucket list, click the previously created bucket name link.
Click the Properties tab.
Click the Events box.
Click the + Add notification link.
Set the preferred value in the Name field.
Mark the All object create events checkbox.
In the Send to field, select the SQS Queue as value.
Select the previously created SQS queue in the SQS field.
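The Access policy edit described in the steps above replaces the account-scoped principal with a wildcard. As an illustrative sketch only (the action and resource ARN are placeholders; the console generates the full JSON for you), the relevant fragment of the resulting queue policy looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:<region>:<account_id>:<queue_name>"
    }
  ]
}
```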
Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.
Info
This minimum configuration refers exclusively to those specific parameters of this integration. There are more required parameters related to the generic behavior of the collector. Check setting sections for details.
Setting
Details
access_key
This is the account identifier for AWS. More info can be found in the section Using a user account and local policies.
access_secret
This is the secret (kind of a password) for AWS. More info can be found in the section Using a user account and local policies.
base_assume_role
This allows assuming a role with limited privileges to access AWS services. More info can be found in the sections Assuming a role (self-account) and/or Assuming a role (cross-account).
target_assume_role
This allows assuming a role on another AWS account with limited privileges to access AWS services. More info can be found in the section Assuming a role (cross-account).
assume_role_external_id
This is an optional field that provides additional security to the assuming role operation on cross-accounts. More info can be found in the section Assuming a role (cross-account).
Info
See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.
Accepted authentication methods
Depending on how you obtained your credentials, you will have to either fill in or delete the following properties in the JSON credentials configuration block.
Authentication method
access_key
access_secret
base_assume_role
target_assume_role
assume_role_external_id
Access Key / Access Secret
Status
colour
Green
title
REQUIRED
Status
colour
Green
title
REQUIRED
Assume role (self-account)
Status
colour
Green
title
REQUIRED
Status
colour
Green
title
REQUIRED
Status
colour
Green
title
REQUIRED
Assume role (cross-account)
Status
colour
Green
title
REQUIRED
Status
colour
Green
title
REQUIRED
Status
colour
Yellow
title
OPTIONAL
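The required/optional matrix above can be expressed as a small validation helper. This is an illustrative sketch only (the method and field names follow the table; the function itself is not part of the collector):

```python
# Required credential fields per authentication method, per the table above.
REQUIRED_FIELDS = {
    "access_key_secret": {"access_key", "access_secret"},
    "assume_role_self": {"access_key", "access_secret", "base_assume_role"},
    "assume_role_cross": {"base_assume_role", "target_assume_role"},
}
# assume_role_external_id is optional and only meaningful for cross-account.
OPTIONAL_FIELDS = {"assume_role_cross": {"assume_role_external_id"}}


def missing_fields(method: str, config: dict) -> set:
    """Return the required fields that are absent or empty for the given method."""
    return {f for f in REQUIRED_FIELDS[method] if not config.get(f)}
```

For example, a cross-account configuration that omits target_assume_role would be flagged before the collector is deployed.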
Run the collector
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).
Rw ui tabs macro
Rw tab
title
Cloud collector
We use a piece of software called Collector Server to host and manage all our available collectors.
To enable the collector for a customer:
In the Collector Server GUI, access the domain in which you want this instance to be created.
Click Add Collector and find the one you wish to add.
In the Version field, select the latest value.
In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).
In the sending method select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.
In the Parameters section, establish the Collector Parameters as follows below:
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.
Replace the placeholders with real-world values following the description table below:
Parameter
Data type
Type
Value range
Details
short_unique_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique ID to this input service.
Note
This parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision.
access_key_value
str
See Accepted authentication methods section above.
Minimum length: 1
The access key ID value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.
access_secret_value
str
See Accepted authentication methods section above.
Minimum length: 1
The secret access key value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.
base_assume_role_value
str
See Accepted authentication methods section above.
Minimum length: 1
The ARN of the role to be assumed in the base account. It can be used for self- or cross-account authentication methods.
target_assume_role_value
str
See Accepted authentication methods section above.
Minimum length: 1
The ARN of the role to be assumed in the customer’s account. It is used for cross-account authentication method.
assume_role_external_id_value
str
See Accepted authentication methods section above.
Minimum length: 1
This is an optional string implemented by the customer to add an extra security layer. It can only be used for cross-account authentication method.
request_period_in_seconds_value
int
Optional
Minimum length: 1
Period in seconds used between each data pull; this value overwrites the default value (300 seconds).
This parameter should be removed if it is not used.
auto_event_type_value
bool
Optional
true/false
Used to enable the auto categorization of message tagging.
start_time_value
datetime
Mandatory for GuardDuty, optional for the rest of the services.
1970-01-01T00:00:00.000Z
Date and time from which to start collecting data. It must match the ISO-8601 format. Note that this is mandatory for GuardDuty and optional for the rest of the services.
list_of_types
list (of strings)
Optional
Code Block
"types" : [
"type1",
"type2",
"type3"
]
Enable/Disable modules only when several modules per service are defined. For example, to get audit events from the API, this field should be set to audits_api.
list_of_regions
list (of strings)
Mandatory, if defined in the collector’s definition.
Code Block
"regions": [
"region1",
"region2",
"region3"
]
Property name (regions) should be aligned with the one defined in the submodules_property property from the “Collector definitions”
list_of_drop_event_names
list (of strings)
Optional
Code Block
"drop_event_names" : [
"drop1",
"drop2",
"drop3"
]
If the value in the eventName field matches any of the values in this field, the event will be discarded.
For example, if this parameter is populated with the values ["Decrypt", "AssumeRole"] and the value of the eventName field is Decrypt or AssumeRole, the event will be discarded.
sqs_queue_name_value
str
Mandatory
Minimum length: 1
Name of the SQS queue to read from.
s3_file_type_filter_value
str
Optional
Minimum length: 1
RegEx to retrieve proper file type from S3, in case there are more than one file types in the same SQS queue from which the service is pulling data.
This parameter can be used for those services getting data from a S3+SQS pipeline.
use_region_and_account_id_from_event_value
bool
Optional
true/false
If true, the region and account_id are taken from the event; if false, they are taken from the account used to do the data pulling.
Default: true
It can be used only in those services using a S3+SQS pipeline.
log_group_value
str
Mandatory
Minimum length: 1
The log group name must be set here as-is, including the different levels separated by slashes. It can be set to '/' (forward slash) as well to get all the log group names.
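To make the expected formats concrete, here is an illustrative sketch (the function names are hypothetical, not collector code) of how three of these parameters are typically interpreted: the ISO-8601 start_time_value, the eventName drop list, and the s3_file_type_filter_value regex:

```python
import re
from datetime import datetime, timezone


def parse_start_time(value: str) -> datetime:
    """start_time_value must be ISO-8601, e.g. 1970-01-01T00:00:00.000Z."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)


def should_drop(event: dict, drop_event_names: list) -> bool:
    """Discard the event when its eventName matches any configured value."""
    return event.get("eventName") in drop_event_names


def matches_file_filter(s3_key: str, s3_file_type_filter: str) -> bool:
    """Keep only S3 objects whose key matches the configured regex."""
    return re.search(s3_file_type_filter, s3_key) is not None
```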
Rw tab
title
On-premise collector
This data collector can be run on any machine that has the Docker service available, since it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created to be used when running the collector:
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.
Replace the placeholders with your required values following the description table below:
Parameter
Data Type
Type
Value Range
Details
collector_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique ID to this collector.
collector_name
str
Mandatory
Minimum length: 1 Maximum length: 10
Use this param to give a valid name to this collector.
devo_address
str
Mandatory
collector-us.devo.io collector-eu.devo.io
Use this param to identify the Devo Cloud where the events will be sent.
chain_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the chain.cert file downloaded from your Devo domain. Usually this file's name is: chain.crt
cert_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the file.cert downloaded from your Devo domain.
key_filename
str
Mandatory
Minimum length: 4 Maximum length: 20
Use this param to identify the file.key downloaded from your Devo domain.
short_unique_id
int
Mandatory
Minimum length: 1 Maximum length: 5
Use this param to give a unique ID to this input service.
Note
This parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision.
access_key_value
str
See Accepted authentication methods section above.
Minimum length: 1
The access key ID value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.
access_secret_value
str
See Accepted authentication methods section above.
Minimum length: 1
The secret access key value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.
base_assume_role_value
str
See Accepted authentication methods section above.
Minimum length: 1
The ARN of the role to be assumed in the base account. It can be used for self- or cross-account authentication methods.
target_assume_role_value
str
See Accepted authentication methods section above.
Minimum length: 1
The ARN of the role to be assumed in the customer’s account. It is used for cross-account authentication method.
assume_role_external_id_value
str
See Accepted authentication methods section above.
Minimum length: 1
This is an optional string implemented by the customer to add an extra security layer. It can only be used for cross-account authentication method.
request_period_in_seconds_value
int
Optional
Minimum length: 1
Period in seconds used between each data pull; this value overwrites the default value (300 seconds).
Info
This parameter should be removed if it is not used.
auto_event_type_value
bool
Optional
true/false
Used to enable the auto categorization of message tagging.
start_time_value
datetime
Mandatory for GuardDuty, optional for the rest of the services.
1970-01-01T00:00:00.000Z
Date and time from which to start collecting data. It must match the ISO-8601 format. Note that this is mandatory for GuardDuty and optional for the rest of the services.
list_of_types
list (of strings)
Optional
Code Block
types:
- type1
- type2
- type3
Enable/Disable modules only when several modules per service are defined. For example, to get audit events from the API, this field should be set to audits_api.
list_of_regions
list (of strings)
Mandatory, if defined in the collector’s definition.
Code Block
regions:
- region1
- region2
- region3
Property name (regions) should be aligned with the one defined in the submodules_property property from the “Collector definitions”
list_of_drop_event_names
list (of strings)
Optional
Code Block
drop_event_names:
- drop1
- drop2
- drop3
If the value in the eventName field matches any of the values in this field, the event will be discarded.
For example, if this parameter is populated with the values ["Decrypt", "AssumeRole"] and the value of the eventName field is Decrypt or AssumeRole, the event will be discarded.
sqs_queue_name_value
str
Mandatory
Minimum length: 1
Name of the SQS queue to read from.
s3_file_type_filter_value
str
Optional
Minimum length: 1
RegEx to retrieve proper file type from S3, in case there are more than one file types in the same SQS queue from which the service is pulling data.
This parameter can be used for those services getting data from a S3+SQS pipeline.
use_region_and_account_id_from_event_value
bool
Optional
true/false
If true, the region and account_id are taken from the event; if false, they are taken from the account used to do the data pulling.
Default: true
It can be used only in those services using a S3+SQS pipeline.
log_group_value
str
Mandatory
Minimum length: 1
The log group name must be set here as-is, including the different levels separated by slashes. It can be set to '/' (forward slash) as well to get all the log group names.
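As an illustrative sketch only (the exact file layout ships with the collector package and may differ by release; the key names below are assumptions based on the placeholders in the table above), these parameters typically map into a YAML block similar to:

```yaml
# Sketch only: the key layout is an assumption, not the definitive config file.
inputs:
  aws:
    id: <short_unique_id>
    enabled: true
    credentials:
      access_key: <access_key_value>
      access_secret: <access_secret_value>
    services:
      service-events-all:          # remove any entity you do not want to run
        request_period_in_seconds: <request_period_in_seconds_value>
        start_time: <start_time_value>
        regions:
          - region1
          - region2
```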
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace <image_file> and <version> with a proper value.
The Docker image can be deployed on the following services:
Docker
Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/
Replace <product_name>, <image_name> and <version> with the proper values.
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.
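The compose file itself is not reproduced here. As a sketch only (the service name, container paths, and restart policy are assumptions; the image name comes from the imported Docker image), it could look like:

```yaml
# docker-compose.yaml - illustrative sketch; adjust names and paths to your setup
version: "3"
services:
  <product_name>:
    image: <image_name>:${IMAGE_VERSION}
    restart: always
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
```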
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:
Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note
Replace <product_name>, <image_name> and <version> with the proper values.
Collector services detail
This section is intended to explain how to proceed with specific actions for services.
Service events (all services)
This service could be considered a general AWS event puller. It reads events from all the AWS services, which are managed by CloudWatch.
Expand
title
Devo categorization and destination
If auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudwatch.events
If auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudwatch.{event_type}
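The routing described above can be sketched as follows (illustrative only; the table names are taken from this page):

```python
def destination_table(event_type: str, auto_event_type: bool) -> str:
    """Pick the Devo destination table for a CloudWatch event."""
    if auto_event_type:
        # Auto categorization: one table per detected event type.
        return f"cloud.aws.cloudwatch.{event_type}"
    # Default: everything goes to the generic events table.
    return "cloud.aws.cloudwatch.events"
```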
Expand
title
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Setup for module <AwsCloudwatchEventsPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting a new pulling from "dc-aws-cloudwatch-test-1.fifo" queue at "2022-09-23T07:44:54.589769+00:00"
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Received 198 response(s), received 1973 message(s), generated 1973 message(s), detected_event_types: ["ssm", "s3", "sts", "backup", "kms", "tag", "config", "logs", "cloudtrail"], avg_time_per_source_message: 335.170 ms
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting a new pulling from "dc-aws-cloudwatch-test-1.fifo" queue at "2022-09-23T07:55:55.546142+00:00"
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Received 1 response(s), received 0 message(s), generated 0 message(s), detected_event_types: [], avg_time_per_source_message: 437.862 ms
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Data collection completed. Elapsed time: 0.438 seconds. Waiting for 59.562 second(s) until the next one
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Code Block
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Received 198 response(s), received 1973 message(s), generated 1973 message(s), detected_event_types: ["ssm", "s3", "sts", "backup", "kms", "tag", "config", "logs", "cloudtrail"], avg_time_per_source_message: 335.170 ms
Info
The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.
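As an illustration of how this identifier groups events, the sketch below (plain Python with invented event records, not collector code) buckets events by their @devo_pulling_id, the same grouping you can perform in Devo's search window:

```python
from collections import defaultdict

# Hypothetical ingested events; only the @devo_pulling_id field comes from
# the text above, the rest is invented for illustration.
events = [
    {"@devo_pulling_id": "1655983326.290848", "eventName": "PutObject"},
    {"@devo_pulling_id": "1655983326.290848", "eventName": "GetObject"},
    {"@devo_pulling_id": "1655983399.102311", "eventName": "AssumeRole"},
]

# Group every event by the pull action that ingested it.
by_pull = defaultdict(list)
for event in events:
    by_pull[event["@devo_pulling_id"]].append(event)

for pull_id, group in sorted(by_pull.items()):
    print(pull_id, len(group))
```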
Restart the persistence
This collector does not use any kind of persistent storage.
Service events (Security Hub)
This service is used to read Security Hub events specifically, since they need to be processed in a different way.
Devo categorization and destination
Using this service, all the Security Hub events are going to be ingested into the table cloud.aws.securityhub.findings.
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Setup for module <AwsCloudwatchEventsPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Code Block
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,sqs-cloudwatch-consumer,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,sqs-cloudwatch-consumer,predefined,us-east-2) -> Starting a new pulling from "cloudwatch-test.fifo" queue at "2022-09-23T08:11:50.440225+00:00"
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,sqs-cloudwatch-consumer,predefined,us-east-2) -> Received 1 response(s), received 0 message(s), generated 0 message(s), detected_event_types: [], avg_time_per_source_message: 519.301 ms
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,sqs-cloudwatch-consumer,predefined,us-east-2) -> Data collection completed. Elapsed time: 0.520 seconds. Waiting for 59.480 second(s) until the next one
Info
The @devo_pulling_id value is injected into each event to allow grouping all events ingested by the same pull action. You can use it to get the exact events downloaded on that Pull action in Devo’s search window.
Restart the persistence
This collector does not use any kind of persistent storage.
Audit events (via API)
This service reads Cloudtrail audit events via API.
There are two ways to read Cloudtrail events: via API or via S3+SQS.
API: It is slower, but can read past events.
S3+SQS: It is much faster, but can only read events since the creation of the queue.
This service makes use of the AWS API to get the data.
Devo categorization and destination
If the auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudtrail.events.
If the auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudtrail.{event_type}.
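The routing rule above can be sketched as a small helper. This is a hypothetical function: the real collector builds a longer tag (account, region, and other fields), "signin" is only an example event type, and only the table-selection logic described above is shown:

```python
def cloudtrail_table(event_type, auto_event_type):
    """Return the Devo destination table for a CloudTrail event.

    Sketch of the routing rule: with auto_event_type enabled, the event type
    is interpolated into the table name; otherwise a fixed table is used.
    """
    if auto_event_type and event_type:
        return "cloud.aws.cloudtrail.{}".format(event_type)
    return "cloud.aws.cloudtrail.events"

print(cloudtrail_table("signin", True))   # cloud.aws.cloudtrail.signin
print(cloudtrail_table("signin", False))  # cloud.aws.cloudtrail.events
```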
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Setup for module <AwsCloudtrailApiPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting a new pulling from "['all_sources']" source at "2022-09-23T08:56:22.366820+00:00"
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Using 15 minutes as "gap until now", start_date: "2022-09-12T12:34:56.123456+00:00", end_date: "2022-09-23T08:41:22.366820+00:00", time_slot_in_hours: "1"
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Total number of time slots to be processed: 261
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Number of processed time slots so far: 100
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Number of processed time slots so far: 200
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Received 1315 response(s), messages (total/dropped/other_region/duplicated/generated): 124/6149/0/0/113, tag template used: "cloud.aws.cloudtrail.{event_type}.123456789012.us-east-8.1.prod-1", avg_time_per_source_message: 708.624 ms
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Elapsed time: 931.842 seconds. Last retrieval took too much time, no wait will be applied in this loop
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Data collection completed. Elapsed time: 2.717 seconds. Waiting for 57.283 second(s) until the next one
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Code Block
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Received 4 response(s), messages (total/dropped/other_region/duplicated/generated): 186/8/0/1/177, tag template used: "cloud.aws.cloudtrail.{event_type}.123456789012.us-west-8.1.prod-1", avg_time_per_source_message: 678.952 ms
Info
The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.
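The backfill behavior shown in the example logs (a start date, an end date, and time_slot_in_hours producing 261 slots) comes down to simple interval arithmetic. A minimal sketch, assuming the slot count is the ceiling of the window length divided by the slot size:

```python
import math
from datetime import datetime

def count_time_slots(start_date, end_date, slot_hours=1):
    """Number of fixed-size time slots needed to cover [start_date, end_date]."""
    start = datetime.fromisoformat(start_date)
    end = datetime.fromisoformat(end_date)
    hours = (end - start).total_seconds() / 3600
    return math.ceil(hours / slot_hours)

# Dates taken from the example log; the window is ~260.1 hours long,
# so 1-hour slots give the 261 slots reported by the puller.
print(count_time_slots("2022-09-12T12:34:56.123456+00:00",
                       "2022-09-23T08:41:22.366820+00:00"))  # 261
```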
Restart the persistence
This collector does not use any kind of persistent storage.
Audit events (via S3 + SQS)
This service reads Cloudtrail audit events via the S3+SQS pipeline.
There are two ways to read Cloudtrail events: via API or via S3+SQS.
API: It is slower, but can read past events.
S3+SQS: It is much faster, but can only read events since the creation of the queue.
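Conceptually, each SQS message in this pipeline carries an S3 event notification pointing at the object(s) to download. A minimal parsing sketch, using the standard S3 notification format with invented bucket and key values (the actual S3 download is omitted):

```python
import json

# A trimmed S3 event notification as it arrives in the SQS message body
# (standard AWS format; bucket and key values are invented for illustration).
sqs_body = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-cloudtrail-bucket"},
            "object": {"key": "AWSLogs/123456789012/CloudTrail/us-east-2/log.json.gz"},
        },
    }]
})

def objects_to_fetch(body):
    """Yield (bucket, key) pairs referenced by one SQS message."""
    for record in json.loads(body).get("Records", []):
        yield record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]

for bucket, key in objects_to_fetch(sqs_body):
    print(bucket, key)
```

Because the queue only receives notifications for objects created after it was wired up, this pipeline cannot see events older than the queue itself, which is the trade-off noted above.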
Devo categorization and destination
If the auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudtrail.events.
If the auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudtrail.{event_type}.
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Setup for module <AwsSqsS3CloudTrailPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
INFO InputProcess::AwsSqsS3CloudTrailPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1797, total_bytes: 3830368 (60.43562 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1797 messages (60.436958 seconds) => 29 msg/sec
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1652, total_bytes: 3555837 (60.311803 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1652 messages (60.313064 seconds) => 27 msg/sec
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1949, total_bytes: 4277470 (60.187779 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1949 messages (60.187248 seconds) => 32 msg/sec
...
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Info
The @devo_pulling_id value is injected into each event to allow grouping all events ingested by the same pull action. You can use it to get the exact events downloaded on that Pull action in Devo’s search window.
Restart the persistence
This collector does not use any kind of persistent storage.
Metrics (All metrics)
This service could be considered a general AWS metric puller. It reads metrics from all the AWS services that generate them. Those metrics are also managed by Cloudwatch.
This service makes use of the AWS API to get the data.
Devo categorization and destination
All the events are going to be ingested into the table cloud.aws.cloudwatch.metrics.
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Setup for module <AwsCloudwatchMetricPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Starting a new pulling from "['AWS/EC2', 'AWS/EC2Spot']" namespaces at "2022-09-23T14:49:36.266007+00:00"
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Time range: "2022-09-23T14:48:00Z" > "2022-09-23T14:49:00Z"
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Received 3 response(s), generated 17 message(s), tag used: "cloud.aws.cloudwatch.metrics.936082584952.us-east-2.1", avg_time_per_source_message: 393.845 ms
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Applied an offset to wait, retrieval_offset: -36.266007 seconds
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Data collection completed. Elapsed time: 1.182 seconds. Waiting for 22.552 second(s) until the next one
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Code Block
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Received 3 response(s), generated 17 message(s), tag used: "cloud.aws.cloudwatch.metrics.936082584952.us-east-2.1", avg_time_per_source_message: 393.845 ms
Info
The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.
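The waiting times reported by the puller (elapsed time, retrieval offset, seconds until the next cycle) follow simple arithmetic. A sketch with a formula assumed from the example log values (60 s interval, 1.182 s elapsed, and a -36.266007 s offset give the ~22.55 s wait reported in the example logs):

```python
def next_wait(interval_s, elapsed_s, retrieval_offset_s=0.0):
    """Seconds to sleep before the next pull cycle.

    Assumed formula: the configured interval minus the time already spent
    in this cycle, shifted by the retrieval offset, floored at zero.
    """
    return max(0.0, interval_s - elapsed_s + retrieval_offset_s)

# Values consistent with the example log: 60 - 1.182 - 36.266007 ≈ 22.552
print(round(next_wait(60.0, 1.182, -36.266007), 3))
```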
Restart the persistence
This collector does not use any kind of persistent storage.
AWS-GuardDuty (Via API)
This service reads GuardDuty events via API. This service does not scale well because it relies on the GuardDuty API. Use it only for low data volumes due to this API limitation; for higher volumes, use AWS_SQS_IF instead.
This service makes use of the AWS API to get the data.
The events are going to be ingested into the table cloud.aws.guardduty.findings.
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
2024-05-31T18:40:27.031 INFO InputProcess::AwsGuardDutyApiPullerSetup(aws,aws#121214,aws-guardduty#predefined,ap-southeast-1) -> Session cannot expire. Using user/profile authentication.
2024-05-31T18:40:28.486 INFO InputProcess::AwsGuardDutyApiPullerSetup(aws,aws#121214,aws-guardduty#predefined,ap-southeast-1) -> Creating user session
2024-05-31T18:40:28.487 INFO InputProcess::AwsGuardDutyApiPullerSetup(aws,aws#121214,aws-guardduty#predefined,ap-southeast-1) -> New AWS session started.
2024-05-31T18:40:29.779 INFO InputProcess::AwsGuardDutyApiPullerSetup(aws,aws#121214,aws-guardduty#predefined,ap-southeast-1) -> Setup for module <AwsGuardDutyApiPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
2024-05-31T13:12:08.535895316Z 2024-05-31T13:12:08.535 INFO InputProcess::AwsGuardDutyApiPuller(aws,1023001,aws-guardduty,predefined,ap-southeast-1) -> Starting data collection every 60 seconds
2024-05-31T13:12:11.648725368Z 2024-05-31T13:12:11.648 INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Created a sender: {"name": "DevoSender(standard_senders,devo_sender_0)", "url": "collector-eu.devo.io:443", "chain_path": "/etc/devo/keys/ca.d/chain.crt", "cert_path": "/etc/devo/keys/devo.crt", "key_path": "/etc/devo/keys/devo.key", "transport_layer_type": "SSL", "last_usage_timestamp": null, "socket_status": null}, hostname: "collector-723d1da04d3a87c4-6d89bfcf9-6xzms", session_id: "139892643371760"
2024-05-31T13:12:22.238716958Z 2024-05-31T13:12:22.229 INFO InputProcess::AwsGuardDutyApiPuller(aws,1023001,aws-guardduty,predefined,ap-southeast-1) -> Total event received: 584, and sent: 584
2024-05-31T13:12:22.238732449Z 2024-05-31T13:12:22.230 INFO InputProcess::AwsGuardDutyApiPuller(aws,1023001,aws-guardduty,predefined,ap-southeast-1) -> Data collection completed. Elapsed time: 13.697 seconds. Waiting for 46.303 second(s) until the next one
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Code Block
2024-05-31T13:12:22.238716958Z 2024-05-31T13:12:22.229 INFO InputProcess::AwsGuardDutyApiPuller(aws,1023001,aws-guardduty,predefined,ap-southeast-1) -> Total event received: 584, and sent: 584
2024-05-31T13:12:22.238732449Z 2024-05-31T13:12:22.230 INFO InputProcess::AwsGuardDutyApiPuller(aws,1023001,aws-guardduty,predefined,ap-southeast-1) -> Data collection completed. Elapsed time: 13.697 seconds. Waiting for 46.303 second(s) until the next one
Info
The @devo_pulling_id value is injected into each event to allow grouping all events ingested by the same pull action. You can use it to get the exact events downloaded on that Pull action in the Data Search area of Devo.
Non Cloudwatch Logs
This service reads logs from some AWS services, but those logs are not managed by Cloudwatch. These logs are stored in an S3 bucket and read through an SQS queue, so it uses an S3+SQS pipeline.
...
Verify data collection
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Setup for module <AwsSqsS3VpcFlowlogsPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only one time before the first run of the Pull action.
Code Block
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Received 2 response(s), messages (fromSQS/generated): 0/0, discarded files: 0, avg_time_per_source_message: 169.711 ms
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Data collection completed. Elapsed time: 0.340 seconds. Waiting for 59.660 second(s) until the next one
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Code Block
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Received 2 response(s), messages (fromSQS/generated): 0/0, discarded files: 0, avg_time_per_source_message: 169.711 ms
Info
The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.
Expand
title
Restart the persistence
This collector does not use any kind of persistent storage.
Custom Logs
This service reads logs from certain AWS services whose logs are managed by Amazon CloudWatch. CloudWatch stores the different log sources in log groups, so a custom puller is required in order to read from several log groups at the same time. This service uses the AWS API to retrieve the data.
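As a sketch of what a log-group read involves: the dictionary below mimics the shape of a CloudWatch Logs filter_log_events API response (field names per AWS; the log events themselves are made up), and the helper extracts the messages while tracking the newest timestamp so the next poll can resume from there.

```python
def extract_events(response, last_timestamp=0):
    """Pull (timestamp, message) pairs out of a CloudWatch Logs
    filter_log_events-style response, keeping the newest timestamp
    so the next poll can start where this one stopped."""
    events = []
    for event in response.get("events", []):
        events.append((event["timestamp"], event["message"]))
        last_timestamp = max(last_timestamp, event["timestamp"])
    return events, last_timestamp

# Illustrative response; a real one would come from
# boto3.client("logs").filter_log_events(logGroupName=..., startTime=...)
sample = {
    "events": [
        {"timestamp": 1663945698000, "message": "first"},
        {"timestamp": 1663945699000, "message": "second"},
    ],
}
events, cursor = extract_events(sample)
```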
...
Expand
title
Verify data collection
Once the collector has been launched, it is important to check whether the ingestion is working properly. To do so, go to the collector’s logs console.
This service has the following components:
Component
Description
Setup
The setup module is in charge of authenticating the service and managing the token expiration when needed.
Puller
The puller module is in charge of pulling the data in an organized way and delivering the events via SDK.
Setup output
A successful run has the following output messages for the setup module:
Code Block
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Setup for module <AwsCloudwatchLogsPuller> has been successfully executed
Puller output
A successful initial run has the following output messages for the puller module:
Info
Note that the PrePull action is executed only once, before the first run of the Pull action.
Code Block
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Starting a new pulling from "/aws/events/devo-cloudwatch-test-1" at "2022-09-23T15:08:18.132865+00:00"
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Optimized first retrieval approach for high number of log streams with medium size
Expand
title
Restart the persistence
This collector does not use any kind of persistent storage.
Cisco Umbrella (via S3+SQS)
This service reads logs from a Cisco Umbrella managed bucket via the S3+SQS pipeline. Cisco provides a way to deposit logging data into an S3 bucket.
Expand
title
Devo categorization and destination
There are three types of events: dnslogs, iplogs, and proxylogs. Cisco stores them in different paths depending on the event type. The collector ingests them into the following tables:
dnslogs: sig.cisco.umbrella.dns
iplogs: sig.cisco.umbrella.ip
proxylogs: sig.cisco.umbrella.proxy
If Cisco starts sending other types of events to S3, they will go to sig.cisco.umbrella.unknown.
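The routing above can be sketched as a simple lookup from the S3 object key to the Devo table. This is a minimal sketch assuming the event-type folder is the first path component of the key; real Umbrella buckets may nest it deeper.

```python
# Map a Cisco Umbrella S3 object key to its Devo destination table.
# Unrecognized prefixes fall back to the "unknown" table.
TABLE_BY_PREFIX = {
    "dnslogs": "sig.cisco.umbrella.dns",
    "iplogs": "sig.cisco.umbrella.ip",
    "proxylogs": "sig.cisco.umbrella.proxy",
}

def devo_table_for(object_key: str) -> str:
    prefix = object_key.split("/", 1)[0]
    return TABLE_BY_PREFIX.get(prefix, "sig.cisco.umbrella.unknown")
```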
Expand
title
Restart the persistence
This collector does not use any kind of persistent storage.
Collector operations
This section explains how to perform the specific operations of this collector.
...
Expand
title
Troubleshooting
This collector has different security layers that detect both invalid configurations and abnormal operation. The tables below will help you detect and resolve the most common errors.
Error type
Error ID
Error message
Cause
Solution
AwsModuleDefinitionError
1
"{module_properties_key_path}" mandatory property is missing or empty
module_properties is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
2
"{module_properties_key_path}" property must be a dictionary
module_properties is not a dictionary type data structure.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
3
"{module_properties_key_path}.tag_base" mandatory property is missing or empty
tag_base is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
4
"{module_properties_key_path}.tag_base" property must be a string
tag_base is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
5
"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders
tag_base does not literally contain all of the required placeholders.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
6
"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]
tag_base has an unauthorized placeholder or is not correctly built.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
7
"{module_properties_key_path}.event_type_default" mandatory property is missing or empty
event_type_default is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
8
"{module_properties_key_path}.event_type_default" property must be a string
event_type_default is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
26
"{module_properties_key_path}.event_type_source_field_name" mandatory property is missing or empty
event_type_source_field_name is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
27
"{module_properties_key_path}.event_type_source_field_name" property must be a boolean
event_type_source_field_name is not of type boolean.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
26
"{module_properties_key_path}.event_type_extracting_regex" mandatory property is missing or empty
event_type_extracting_regex is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
27
"{module_properties_key_path}.event_type_extracting_regex" property must be a boolean
event_type_extracting_regex is not of type boolean.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
5
"{module_properties_key_path}.event_type_extracting_regex" property is not a valid regular expression
event_type_extracting_regex is not a valid Regular Expression.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
7
"{module_properties_key_path}.enable_auto_event_type" mandatory property is missing or empty
enable_auto_event_type is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
8
"{module_properties_key_path}.enable_auto_event_type" property must be a string
enable_auto_event_type is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
9
"{module_properties_key_path}.enable_auto_event_type_config_key" mandatory property is missing or empty
enable_auto_event_type_config_key is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
10
"{module_properties_key_path}.enable_auto_event_type_config_key" property must be a string
enable_auto_event_type_config_key is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
31
"{module_properties_key_path}.event_type_processor_mapping" property should be a dictionary.
event_type_processor_mapping is not of type dictionary.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
32
"{module_properties_key_path}.event_type_processor_mapping" exists but it is empty.
event_type_processor_mapping cannot be empty, and it is.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
33
"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.processor_class" mandatory property is missing or empty
processor_class does not exist or is empty.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
34
"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.processor_class" should be a string
processor_class is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
35
"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.tagging" mandatory property is missing or empty
tagging is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
36
"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.tagging" should be a string
tagging is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
10
"{module_properties_key_path}.sqs_queue_custom_name_key" mandatory property is missing or empty
sqs_queue_custom_name_key is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
11
"{module_properties_key_path}.sqs_queue_custom_name_key" property must be a string
sqs_queue_custom_name_key is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
12
"{module_properties_key_path}.sqs_queue_required_default_name" property must be a boolean
sqs_queue_required_default_name is not of type boolean.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
13
"{module_properties_key_path}.sqs_queue_default_name" mandatory property is missing or empty
sqs_queue_default_name is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
14
"{module_properties_key_path}.sqs_queue_default_name" property must be a string
sqs_queue_default_name is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
15
"{module_properties_key_path}.sqs_queue_default_name" property must have {input_id} placeholder
sqs_queue_default_name does not have the required {input_id} placeholder.
This is an internal issue. Please contact the Devo Support team.
AwsInputConfigurationError
1
"{input_config_key_path}" mandatory property is missing or empty
The inputs data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsInputConfigurationError
2
"{input_config_key_path}" property must be a dictionary
The inputs data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsServiceConfigurationError
1
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
2
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
3
"{service_config_key_path}.tag" property must be a string
tag is not of type string.
Change the tag parameter to be a string.
AwsServiceConfigurationError
4
"{service_config_key_path}.{sqs_queue_custom_name_key}" mandatory property is missing or empty
The parameter indicated in the error message is not present in the configuration file.
Add the indicated parameter to the configuration file.
AwsServiceConfigurationError
5
"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
AwsServiceConfigurationError
6
"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
AwsServiceConfigurationError
7
"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
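Errors 5 and 6 above define the placeholder rules for tag_base. A minimal sketch of that validation, with the required and allowed sets taken from the error messages themselves:

```python
import re

# From errors 5 and 6: these placeholders must all appear literally,
# and no placeholder outside ALLOWED may appear.
REQUIRED = {"event_type", "account_id", "region_id", "format_version"}
ALLOWED = REQUIRED | {"environment", "service_name"}

def check_tag_base(tag_base: str) -> list:
    """Return a list of problems with a tag_base template (empty if OK)."""
    found = set(re.findall(r"\{(\w+)\}", tag_base))
    problems = []
    if not REQUIRED <= found:
        problems.append("missing placeholders: %s" % sorted(REQUIRED - found))
    if not found <= ALLOWED:
        problems.append("unexpected placeholders: %s" % sorted(found - ALLOWED))
    return problems
```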
Common for all the services using the S3+SQS pipeline
Error type
Error ID
Error message
Cause
Solution
AwsModuleDefinitionError
1
"{module_properties_key_path}" mandatory property is missing or empty
module_properties is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
2
"{module_properties_key_path}" property must be a dictionary
module_properties is not a dictionary type data structure.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
10
"{module_properties_path}.start_time_regex" mandatory property is missing or empty
start_time_regex is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
11
"{module_properties_path}.start_time_regex" property must be a string
start_time_regex is of a type other than string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
12
"{module_properties_path}.start_time_regex" property is not a valid regular expression
start_time_regex is not a valid Regular Expression.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
21
"{sqs_s3_processor_properties_key_path}" mandatory property is missing or empty
The parameter indicated in the error message is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
22
"{sqs_s3_processor_properties_key_path}" property must be a dictionary
The parameter indicated in the error message is not of type dictionary.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
26
"{sqs_s3_processor_properties_key_path}.class_name" mandatory property is missing or empty
class_name is empty or is not present in the collector definition file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
27
"{sqs_s3_processor_properties_key_path}.class_name" property must be a string
class_name is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
10
"{module_properties_key_path}.sqs_queue_custom_name_key" mandatory property is missing or empty
sqs_queue_custom_name_key is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
11
"{module_properties_key_path}.sqs_queue_custom_name_key" property must be a string
sqs_queue_custom_name_key is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
12
"{module_properties_key_path}.sqs_queue_required_default_name" property must be a boolean
sqs_queue_required_default_name is not of type boolean.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
13
"{module_properties_key_path}.sqs_queue_default_name" mandatory property is missing or empty
sqs_queue_default_name is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
14
"{module_properties_key_path}.sqs_queue_default_name" property must be a string
sqs_queue_default_name is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
15
"{module_properties_key_path}.sqs_queue_default_name" property must have {input_id} placeholder
sqs_queue_default_name does not have the required {input_id} placeholder.
This is an internal issue. Please contact the Devo Support team.
AwsInputConfigurationError
1
"{input_config_key_path}" mandatory property is missing or empty
The inputs data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsInputConfigurationError
2
"{input_config_key_path}" property must be a dictionary
The inputs data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsServiceConfigurationError
1
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
2
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
3
"{service_config_key_path}.tag" property must be a string
tag is not of type string.
Change the tag parameter to be a string.
AwsServiceConfigurationError
4
"{service_config_key_path}.{sqs_queue_custom_name_key}" mandatory property is missing or empty
The parameter indicated in the error message is not present in the configuration file.
Add the indicated parameter to the configuration file.
AwsServiceConfigurationError
5
"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
AwsServiceConfigurationError
6
"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
AwsServiceConfigurationError
7
"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string
The parameter indicated in the error message is not of type string.
Change the indicated parameter to be a string.
AwsServiceConfigurationError
7
"{service_config_key_path}.start_time" property must be a string
start_time is not of type string.
Change the start_time parameter to be a string.
AwsServiceConfigurationError
8
The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"
start_time parameter does not match the Regular Expression.
Change the start_time parameter to match the Regular Expression.
AwsQueueException
0
Queue "{sqs_queue_name}" used by service "{service_name}" in "{submodule_config}" region is not available: reason: {reason}
The queue indicated in the error message is not available.
Check the following:
Name of the queue.
The queue exists in the indicated region.
Read carefully the reason returned in the error message.
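The start_time checks above (first the type, then the pattern match) can be sketched as follows. The pattern is illustrative; the real start_time_regex comes from the collector definitions file and is echoed in the error message:

```python
import re

# Illustrative ISO-8601-style pattern; the actual start_time_regex
# is defined in the collector definitions file.
START_TIME_REGEX = r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z"

def start_time_is_valid(start_time) -> bool:
    """Mirror the two configuration checks: start_time must be a
    string and must fully match the expected pattern."""
    return isinstance(start_time, str) and \
        re.fullmatch(START_TIME_REGEX, start_time) is not None
```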
Audit (via API)
Error type
Error ID
Error message
Cause
Solution
AwsModuleDefinitionError
1
"{module_properties_key_path}" mandatory property is missing or empty
module_properties is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
2
"{module_properties_key_path}" property must be a dictionary
module_properties is not a dictionary type data structure.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
3
"{module_properties_key_path}.tag_base" mandatory property is missing or empty
tag_base is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
4
"{module_properties_key_path}.tag_base" property must be a string
tag_base is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
5
"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders
tag_base does not literally contain all of the required placeholders.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
6
"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]
tag_base has an unauthorized placeholder or is not correctly built.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
7
"{module_properties_key_path}.event_type_default" mandatory property is missing or empty
event_type_default is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
8
"{module_properties_key_path}.event_type_default" property must be a string
event_type_default is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
9
"{module_properties_key_path}.enable_auto_event_type" mandatory property is missing or empty
enable_auto_event_type is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
10
"{module_properties_key_path}.enable_auto_event_type" property must be a string
enable_auto_event_type is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
11
"{module_properties_key_path}.enable_auto_event_type_config_key" mandatory property is missing or empty
enable_auto_event_type_config_key is empty or is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
12
"{module_properties_key_path}.enable_auto_event_type_config_key" property must be a string
enable_auto_event_type_config_key is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
13
"{module_properties_key_path}.start_time_regex" mandatory property is missing or empty
start_time_regex is empty or is not present in the collector definition file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
14
"{module_properties_key_path}.start_time_regex" property must be a string
start_time_regex is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
15
"{module_properties_key_path}.start_time_regex" property is not a valid regular expression
start_time_regex is not a valid Regular Expression.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
16
"{module_properties_key_path}.gap_until_now_in_minutes" mandatory property is missing or empty
gap_until_now_in_minutes is empty or not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
17
"{module_properties_key_path}.gap_until_now_in_minutes" property must be a string
gap_until_now_in_minutes is not of type integer.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
18
"{module_properties_key_path}.gap_until_now_in_minutes" property can not be a negative value
gap_until_now_in_minutes has a negative value, which is not allowed.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
19
"{module_properties_key_path}.time_slot_in_hours" mandatory property is missing or empty
time_slot_in_hours is empty or is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
20
"{module_properties_key_path}.time_slot_in_hours" property must be an integer
time_slot_in_hours is not of type integer.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
21
"{module_properties_key_path}.time_slot_in_hours" property can not be 0 or a negative value
time_slot_in_hours is zero or has a negative value, which is not allowed.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
22
"{module_properties_key_path}.sources" mandatory property is missing or empty
sources is empty or is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
23
"{module_properties_key_path}.sources" property exists but with wrong format, only "str" or "list" values are allowed
sources is not of type string or list.
This is an internal issue. Please contact the Devo Support team.
AwsInputConfigurationError
1
"{input_config_key_path}" mandatory property is missing or empty
The inputs data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsInputConfigurationError
2
"{input_config_key_path}" property must be a dictionary
The inputs data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsServiceConfigurationError
1
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
2
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
3
"{service_config_key_path}.tag" property must be a string
tag is not of type string.
Change the tag parameter to be a string.
AwsServiceConfigurationError
4
"{service_config_key_path}.sources" property exists but with wrong format, only "str" or "list" values are allowed
sources is not of type string or list.
Change the sources parameter to be a string or a list.
AwsServiceConfigurationError
5
"{service_config_key_path}.gap_until_now_in_minutes" property must be an integer
gap_until_now_in_minutes is not of type integer.
Change the gap_until_now_in_minutes parameter to be an integer.
AwsServiceConfigurationError
6
"{service_config_key_path}.start_time" property must be a string
start_time is not of type string.
Change the start_time parameter to be a string.
AwsServiceConfigurationError
7
The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"
start_time does not match the expected pattern.
Change the start_time parameter to match the Regular Expression indicated in the error message.
AwsServiceConfigurationError
8
"{service_config_key_path}.drop_event_names" property must be a list
drop_event_names is not of type list.
Change the drop_event_names parameter to be a list.
AwsServiceConfigurationError
9
"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string
The parameter indicated by the error message is not of type string.
Change the parameter indicated by the error message to be a string.
AwsServiceConfigurationError
10
"{service_config_key_path}.time_slot_in_hours" property must be integer
time_slot_in_hours is not of type integer.
Change the time_slot_in_hours parameter to be an integer.
Custom Logs
Error type
Error ID
Error message
Cause
Solution
AwsInputConfigurationError
0
Mandatory property "requests_per_second" is missing (located at: aws.request_per_second)
requests_per_second is not present in the configuration file.
Add requests_per_second to the configuration file.
AwsModuleDefinitionError
1
"{module_properties_key_path}" mandatory property is missing or empty
module_properties is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
2
"{module_properties_key_path}" property must be a dictionary
module_properties is not a dictionary type data structure.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
3
"{module_properties_key_path}.tag_base" mandatory property is missing or empty
tag_base is not present in collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
4
"{module_properties_key_path}.tag_base" property must be a string
tag_base is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
5
"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders
tag_base does not literally contain all of the required placeholders.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
6
"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]
tag_base has an unauthorized placeholder or is not correctly built.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
7
"{module_properties_key_path}.start_time_regex" mandatory property is missing or empty
start_time_regex is not present or is empty in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
8
"{module_properties_key_path}.start_time_regex" property must be a string
start_time_regex is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsModuleDefinitionError
9
"{module_properties_key_path}.start_time_regex" property is not a valid regular expression
start_time_regex is not a valid Regular Expression.
This is an internal issue. Please contact the Devo Support team.
AwsInputConfigurationError
1
"{input_config_key_path}" mandatory property is missing or empty
The inputs data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsInputConfigurationError
2
"{input_config_key_path}" property must be a dictionary
The inputs data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.
AwsServiceConfigurationError
1
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
2
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
3
"{service_config_key_path}.tag" property must be a string
tag is not of type string.
Change the tag parameter to be a string.
AwsServiceConfigurationError
43
"{service_config_key_path}.use_first_optimized_retrieval" property must be a boolean
use_first_optimized_retrieval is not of type boolean.
Change the use_first_optimized_retrieval parameter to be a boolean.
AwsServiceConfigurationError
1
"{service_config_key_path}.log_group" mandatory property is missing or empty
log_group is empty or is not present in the configuration file.
Add the log_group parameter to the configuration file.
AwsServiceConfigurationError
1
"{service_config_key_path}.log_group" property must be a string
log_group is not of type string.
Change the log_group parameter to be a string.
AwsServiceConfigurationError
2
"{service_config_key_path}.start_time" property must be a string
start_time is not of type string.
Change the start_time parameter to be a string.
AwsServiceConfigurationError
1
The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"
start_time is not matching the pattern indicated in the error message.
Make start_time match the pattern indicated in the error message.
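The start_time checks described in the rows above can be sketched as follows. This is a minimal illustration, not the collector's actual code: the real pattern comes from the start_time_regex property in the collector definitions file, so the regex shown here is only an assumption.

```python
import re

# Illustrative pattern only: the real value of "start_time_regex" is read
# from the collector definitions file and may differ.
START_TIME_REGEX = r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z$"


def validate_start_time(start_time):
    """Reproduce the start_time type and format checks described above."""
    # "start_time" property must be a string.
    if not isinstance(start_time, str):
        raise TypeError('"start_time" property must be a string')
    # start_time must match the pattern from the collector definitions.
    if re.match(START_TIME_REGEX, start_time) is None:
        raise ValueError(
            f'"start_time" has a wrong format, expected pattern: "{START_TIME_REGEX}"'
        )
    return start_time


# A compliant value passes; a malformed one raises ValueError.
validate_start_time("2024-01-31T00:00:00Z")
```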
Metrics
Error type
Error ID
Error message
Cause
Solution
AwsServiceDefinitionException
1
"{module_properties_key_path}" mandatory property is missing or empty
module_properties is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
2
"{module_properties_key_path}" property must be a dictionary
module_properties is not a dictionary-type data structure.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
5
"{module_properties_key_path}.tag_base" mandatory property is missing or empty
tag_base is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
6
"{module_properties_key_path}.tag_base" property must be a string
tag_base is not of type string.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
1
"{module_properties_key_path}.metric_namespace" mandatory property is missing or empty
metric_namespace is empty or is not present in the collector definitions file.
This is an internal issue. Please contact the Devo Support team.
AwsServiceDefinitionException
1
"{module_properties_key_path}.metric_namespace" property must be a list
metric_namespace is not of type list.
This is an internal issue. Please contact the Devo Support team.
AwsInputConfigurationError
1
"{input_config_key_path}" mandatory property is missing or empty
The inputs data structure is missing or empty in the configuration file.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the inputs.
AwsInputConfigurationError
1
"{input_config_key_path}" property must be a dictionary
The inputs data structure is not of type dictionary.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the inputs.
AwsServiceConfigurationError
1
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
8
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
18
"{service_config_key_path}" mandatory property is missing or empty
The services data structure is missing or empty in the configuration file.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
8
"{service_config_key_path}" property must be a dictionary
The services data structure is not of type dictionary.
Go to the “Running the collector on…” section in the documentation and check the data structure required for the services.
AwsServiceConfigurationError
8
"{service_config_key_path}.tag" property must be a string
tag is not of type string.
Change the tag parameter to be a string.
AwsServiceConfigurationError
0
Mandatory property "metric_namespaces" is missing
metric_namespaces is not present in configuration file.
Add metric_namespaces to the configuration file.
AwsServiceConfigurationError
0
Mandatory property "metric_namespaces" property must be a list
metric_namespaces is not of type list.
Change the metric_namespaces parameter to be a list.
AwsServiceConfigurationError
1
When a service uses "metrics" type, the property "request_period_in_seconds" must have one of the following values: 1, 5, 10, 30, 60, or any multiple of 60
request_period_in_seconds is using a value that is not allowed.
Change the request_period_in_seconds parameter to match one of these values: 1, 5, 10, 30, 60, or any multiple of 60.
AwsServiceConfigurationError
4
"start_time" property must be a string
start_time is not of type string.
Change the start_time parameter to be a string.
AwsServiceConfigurationError
4
The property "start_time" from configuration is having a wrong format, expected: YYYY-mm-ddTHH:MM:SSZ
start_time is using an incorrect format.
Change the start_time to match the format indicated in the error message.
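Taken together, the metrics-service rows above describe a set of configuration checks. Below is a minimal sketch of those checks, assuming the configuration has already been loaded into a Python dictionary; the function name and the dictionary layout are illustrative assumptions, not the collector's actual API.

```python
from datetime import datetime

# Base values explicitly allowed for request_period_in_seconds.
ALLOWED_BASE_PERIODS = {1, 5, 10, 30, 60}


def validate_metrics_service(config):
    """Apply the metrics-service configuration checks described above."""
    # metric_namespaces is mandatory and must be a list.
    namespaces = config.get("metric_namespaces")
    if namespaces is None:
        raise ValueError('Mandatory property "metric_namespaces" is missing')
    if not isinstance(namespaces, list):
        raise TypeError('"metric_namespaces" property must be a list')

    # request_period_in_seconds must be 1, 5, 10, 30, 60, or a multiple of 60.
    period = config.get("request_period_in_seconds", 60)
    if period not in ALLOWED_BASE_PERIODS and (period <= 0 or period % 60 != 0):
        raise ValueError(
            '"request_period_in_seconds" must be 1, 5, 10, 30, 60, '
            "or any multiple of 60"
        )

    # start_time must be a string in YYYY-mm-ddTHH:MM:SSZ format.
    start_time = config.get("start_time")
    if start_time is not None:
        if not isinstance(start_time, str):
            raise TypeError('"start_time" property must be a string')
        # strptime raises ValueError when the format does not match.
        datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%SZ")
```

For example, a configuration with `metric_namespaces: ["AWS/EC2"]`, `request_period_in_seconds: 300`, and `start_time: "2024-01-01T00:00:00Z"` passes all three checks.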
Change log
Release
Released on
Release type
Details
Recommendations
v1.11.0
Improvement
Bug fix
Improvements
Updated the DCSDK base Docker image to 1.3.1.
Added unit tests and a user guide
Upgraded Boto3 libraries from 1.34.97 to 1.35.92
Updated DCSDK from 1.11.1 to 1.13.1:
Added new sender for relay in house + TLS
Added persistence functionality for gzip sending buffer
Added Automatic activation of gzip sending
Improved behaviour when persistence fails
Upgraded DevoSDK dependency
Fixed console log encoding
Restructured python classes
Improved behaviour with non-utf8 characters
Decreased default size value for internal queues (Redis limitation, from 1GiB to 256MiB)
New persistence format/structure (compression in some cases)
Removed dmesg execution (it was invalid for Docker execution)
Applied changes to make DCSDK compatible with macOS
Upgraded DevoSDK dependency to version v5.4.0
Changed internal queue management to protect against OOM kills
Extracted ModuleThread structure from PullerAbstract
Improved controlled stop when both processes fail to instantiate
Improved controlled stop when InputProcess is killed
Fixed a bug related to the loss of collector_name, collector_id and job_id
Fixed a bug related to queues and ValueError
Fixed an error related to a ValueError exception that was not properly handled
Fixed an error related to the loss of some values in internal messages
Bug fixes:
Changes in the code to handle the GuardDuty missing logs issue
v1.10.0
New feature
Improvements:
Implemented the GuardDuty service: added puller setup and a puller for it
Upgrade
v1.8.2
Improvement
Improvements:
Upgraded the DCSDK Docker base image to 1.2.0
Upgrade
v1.8.1
Bug fix
Bug Fixes:
Fixed a bug when dealing with events that have no lastEventTimestamp present in the log_stream
Upgrade
v1.8.0
Improvement
New feature
New Feature
Updated the method to pull from all log groups when the log_group parameter is set to '/' in the configuration
Improvements
Upgraded DCSDK from 1.9.2 to 1.10.2
Ensure special characters are properly sent to the platform
Changed log level to some messages from info to debug
Changed some wrong log messages
Upgraded some internal dependencies
Changed queue passed to setup instance constructor
Ability to validate collector setup and exit without pulling any data
Ability to store in the persistence the messages that couldn't be sent after the collector stopped
Ability to send messages from the persistence when the collector starts and before the puller begins working
Upgrade
v1.7.1
Bug fix
Fixed the way the collector handles milliseconds, since the strptime function has been updated since 2021
Fixed the missing parameter in a method call
Recommended version
v1.6.0
New feature
New features:
Added Cisco Umbrella new data source using SQS+S3
Added is_aws_service optional parameter in collector_definitions.yaml.
Added event_type_file_regex_patterns optional parameter to set a dict as: event_type -> regex_for_s3_file_key
Upgrade
v1.5.0
Improvement
Improvements
Upgraded boto libraries from 1.21.36 to 1.28.24
Upgraded DCSDK from 1.3.0 to 1.9.1
Upgrade
v1.4.1
Bug fix
Bug Fixes:
Fixed a bug that prevented the use of the Assumed Role authentication method.
Fixed a bug that prevented session renewal when using any of the Assume Authentication methods:
Assume Role
Cross Account
Upgrade
v1.4.0
New feature
Improvement
Bug fix
New features:
The CrossAccount authentication method is now available, improving the way in which credentials are shared when the collector is running in the Collector Service.
Improvements:
The audit-events-all service (type audits_api) has been enhanced to allow requesting events older than 500 days.
Bug Fixes:
Fixed a bug that raised a KeyError when the optional parameter event_type_processor_mapping was not defined while running the service-events-all service.