Table of Contents
maxLevel2
typeflat

Service description

...

You can use the AWS collector to retrieve data from the AWS APIs and send it to your Devo domain. Once the gathered information arrives at Devo, it will be processed and included in different tables in the associated Devo domain so users can analyze it.

Data source description

From the monitoring point of view, AWS generates the following types of information:

Expand
titleEvents

Information that happens at a specific timestamp. This information will always be linked to that timestamp, and can be categorized into two different subtypes:

Service events

The different available services usually generate information related to their internal behaviors, such as "a virtual machine has been started", "a new file has been created in an S3 bucket", or "an AWS Lambda function has been invoked".

Note that this type of event can be triggered with no human interaction. These kinds of events are managed by the CloudWatch Events service (CWE). Recently, AWS has created a new service called Amazon EventBridge that will replace the CWE service.
The findings detected by AWS Security Hub are also managed by CloudWatch Events (CWE).

Audit events

These events are more specific because they require human interaction, regardless of how it is performed (API, web console, or even a CLI command).

These events are managed by the CloudTrail service.

Expand
titleMetrics

According to the standard definition, this kind of information is usually generated at the exact moment it is requested because it is typically a query related to the status of a service (everything inside AWS is considered a service).

AWS works slightly differently: it generates metrics information every N time slots, such as 1 min, 5 min, 30 min, 1 h, etc., even if no one makes a request.

This kind of information is managed by the CloudWatch Metrics service (CWM).

Expand
titleLogs

Logs can be defined as information with a non-fixed structure that is sent to one of the available logging services. These services are CloudWatch Logs and S3.

There are some very customizable services, such as AWS Lambda, or even any developed application which is deployed inside an AWS virtual machine (EC2), that can generate custom log information. This kind of information is managed by the CloudWatch Logs service (CWL) and also by the S3 service.

There are also some other services that can generate logs with a fixed structure, such as VPC Flow Logs or CloudFront Logs. These services require a special way of collecting their data.

Info

Some services generate information that can be sent to different targets at the same time. For example, the CloudTrail service generates audit-related information: this information is really an "audit event", but it can be treated as a "simple event" and sent to the CloudWatch Events service, sent as a string "logline" to the CloudWatch Logs service, or sent as a file to a bucket inside the S3 service.

Note
  • CloudWatch Events is in the process of changing its name; the new name is Amazon EventBridge.

  • Almost all services that generate service events send them to the CloudWatch Events service (CWE). It could be said that 90% of services use CloudWatch Events (the same service events are also sent to the new Amazon EventBridge service).

  • CloudWatch Events (CWE), CloudWatch Metrics (CWM), and CloudWatch Logs (CWL) are considered different services.

Setup

Some manual actions are necessary in order to get all the required information/services and allow the Devo collector to gather the information from AWS.

The following sections describe how to get the required AWS credentials and how to proceed with the different required setups depending on the gathered information type.

...

There are several available options to define credentials; only some of them are detailed in the following sections.

It is recommended to have available, or create, the following IAM policies before creating the IAM user that will be used for the AWS collector.

...

titlePolicy details

...

Source type

...

AWS Data Bus

...

Recommended policy name

...

Variant

...

 

...

Service events

...

CloudWatch Events

...

devo-cloudwatch-events

...

All resources

...

Tip

Creating a new policy is not required because no permissions are needed.

...

Audit events

...

CloudTrail API

...

devo-cloudtrail-api

...

All resources

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudtrail:LookupEvents",
            "Resource": "*"
        }
    ]
}

...

 

 

...

Specific resource

...

Note

There is no way to limit the accessed resources.

...

CloudTrail SQS+S3

...

devo-cloudtrail-s3

...

All resources

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

...

 

...

 

...

 

...

Specific S3 bucket

Info

Note that the value of the property called Resource should be replaced with the proper value.

Note

It is very important to keep the /* string at the end of each bucket name.

 

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::devo-cloudtrail-storage-bucket1/*",
                "arn:aws:s3:::devo-cloudtrail-storage-bucket2/*"
            ]
        }
    ]
}

...

Metrics

...

CloudWatch Metrics

...

devo-cloudwatch-metrics

...

All resources

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics"
            ],
            "Resource": "*"
        }
    ]
}

...

Specific resource

...

Note

There is no way to limit the accessed resources.

...

Logs

...

CloudWatch Logs

...

devo-cloudwatch-logs

...

All log groups

Info

Note that the value of the Resource property should be adapted with the proper account ID value.

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": "arn:aws:logs:*:936082584952:log-group:*"
        }
    ]
}

...

Specific log groups

Info

Note that the values inside the Resource property are only examples and should be replaced with the proper values.

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-1:*",
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-2:*"
            ]
        }
    ]
}

...

Logs to S3 + SQS

...

devo-vpcflow-logs

...

All resources

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

...

Specific resource

...

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::vpc-flowlogs-test1/*"
        }
    ]
}
Expand
titleUsing a user account and local policies

Depending on which source types are collected, one or more of the policies described above will be used.

Once the required policies are created, each one must be associated with an IAM user (an AWS CLI alternative is shown after these steps). To create the user, visit the AWS Console and log in with a user account that has enough permissions to create and access AWS structures:

  1. Go to IAM → Users.

  2. Click the Add users button.

  3. Enter the required value in the field User name.

  4. Enable the checkbox Access key - Programmatic access.

  5. Click the Next: Permissions button.

  6. Choose the box with the text Attach existing policies directly.

  7. Use the search box to locate all your required policies and check the boxes at the left of each policy name.

  8. Click Next: Tags. Optionally, add any desired tags to the new user.

  9. Click Next: Review.

  10. Click the Create user button.

  11. A new Access Key ID and Secret Access Key will be created. You can click the Download .csv button to download a copy to your local machine, or you can copy the values shown on the screen. These will be used as the AWS collector credentials.

  12. Finally, click the Close button.
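
If you prefer to script this process, an equivalent setup can also be sketched with the AWS CLI. This is only a reference: the user name and the policy ARN below are example values and must be adapted to your account and to the policies you actually created.

Code Block
# Create the IAM user that the collector will use (example name)
aws iam create-user --user-name devo-aws-collector

# Attach one of the previously created policies (example policy ARN)
aws iam attach-user-policy --user-name devo-aws-collector --policy-arn arn:aws:iam::<account_id>:policy/devo-cloudtrail-api

# Generate the Access Key ID / Secret Access Key used as the collector credentials
aws iam create-access-key --user-name devo-aws-collector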

Expand
titleAssuming a role (Self Account)

It is a best practice to assume roles that are granted just the required privileges to perform an action. If the customer does not want to use their own AWS user to perform these actions required by the collector (because it has far more privileges than required), they can use this option. Note that this option requires the use of AWS account credentials. To avoid sharing those credentials, check the Cross Account option below.

The customer must attach the required policies in AWS to the role that is going to be assumed. For more information about the AssumeRole feature, check the AWS documentation.
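
As a reference, the role to be assumed also needs a trust relationship that allows the collector user to assume it. The following is a minimal sketch of such a trust policy; the user ARN is a placeholder and must be adapted:

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:user/<COLLECTOR_IAM_USER>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}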

Regarding configuration, these are the fields required to use this way of authentication:

Code Block
...,
"credentials":{
  "access_key": "<CUSTOMER_AWS_ACCOUNT_ACCESS_KEY>",
  "access_secret": "<CUSTOMER_AWS_ACCOUNT_SECRET_ACCESS_KEY>",
  "base_assume_role": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:role/<ROLE_TO_BE_ASSUMED>"
}
...,
  • access_key is the Access Key ID provided by AWS during the user creation process to those users that ask for programmatic access.

  • access_secret is the Secret Access Key provided by AWS during the user creation process to those users that ask for programmatic access. This value is shown once, so it must be saved upon creation.

  • base_assume_role is the ARN of the role that is going to be assumed by the user authenticated with the parameters above, access_key and access_secret. This role has to be properly granted to allow the actions the collector is going to perform.

Expand
titleAssuming a role (Cross Account)

If the customer does not want to share their credentials with Devo, there is another way to run the collector, called Cross Account. The AssumeRole functionality should be used in this case; it is explained step by step in this article.

In addition, some parameters must be added to the configuration file (config.json, or the equivalent config.yaml for on-prem). In the credentials section, instead of sharing access_key and access_secret, these other parameters must be used:

Code Block
...,
"credentials":{
  "base_assume_role": "arn:aws:iam::<BASE_SYSTEM_AWS_ACCOUNT_ID>:role/<BASE_SYSTEM_ROLE>",
  "target_assume_role": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:role/<CUSTOMER_ROLE_TO_BE_ASSUMED>",
  "assume_role_external_id": "<OPTIONAL__ANY_STRING_YOU_WANT>"
}
...,
  • base_assume_role is the ARN of the role that is going to be assumed by the profile bound to the machine/instance where the collector is running. As explained in the link above, this role is going to be trusted by the customer’s AWS account, so it can assume the role in the target account. That role assumed from the customer’s account will allow the collection of data without the need of sharing the credentials. This role already exists in Devo’s AWS account and to deploy the collector on Devo’s Collector Server its value must be arn:aws:iam::837131528613:role/devo-xaccount-cs-role.

  • target_assume_role is the ARN of the role in the customer’s AWS account. That role will allow the collector to have access to the resources specified in that role. To keep your data secure, please, use policies that grant just the necessary permissions.

  • assume_role_external_id is an optional parameter to add more security to this Cross Account operation. This value should be a string added in the request to assume the customer’s role.

...

All the service events that are generated on AWS are managed by Cloudwatch. However, Devo’s AWS collector offers two different services that collect Cloudwatch events:

  • sqs-cloudwatch-consumer - This service is used to collect Security Hub events.

  • service-events-all - This service is used to collect events from the rest of the AWS services.

Note

AWS services generate service events per region, so the following instructions should be applied in each region where data collection is required (use the same values for all your configured regions).

In order to collect these service events, there are some structures that must be created: one FIFO queue in the SQS service and one Rule+Target in the CloudWatch service.

Note

If the auto-setup functionality is enabled in the configuration and the related credentials have enough permissions to create all the required AWS structures, the following steps are not required.

For a manual creation of these required structures, follow the next steps (click to expand):

Expand
titleSQS FIFO queue creation
  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the FIFO queue type and set the Name field value you prefer (it must finish with the .fifo suffix).

  3. In the Configuration section, set the Message retention period field value to 5 Days. Be sure that the Content-based deduplication checkbox is marked and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Basic and select the option Only the queue owner to receive and send permissions.

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.

Expand
titleCloudWatch Rule + Target creation
  1. Go to CloudWatch → Rules and click Create rule.

  2. In the Event Source section, select Event Pattern and Build event pattern to match events by service (a sample resulting pattern is shown after these steps).

  3. In the Service Name field, enter the service to be monitored (check the note below for Security Hub Findings).

  4. In the Event Type field, choose All Events (check the note below for Security Hub Findings).

  5. In the Targets section, click Add target and choose SQS queue as the target type.

  6. In the Queue dropdown, choose the previously created queue.

  7. In the Message group ID field, set the value devo-collector.

  8. Then, click Configure details.

  9. In the Rule definition section, set the Name you prefer. Be sure that the State checkbox is marked.

Note

To retrieve Security Hub Findings, select Security Hub in the Service Name field, and Security Hub Findings - Custom Action in the Event Type field.
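
For reference, the event pattern generated when matching all events from a single service, and the one generated for Security Hub custom-action findings, look similar to the following sketches (the aws.ec2 source is only an example; adapt it to the service you want to monitor):

Code Block
{
  "source": ["aws.ec2"]
}

Code Block
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Custom Action"]
}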

...

No actions are required in the Cloudtrail service to retrieve this kind of information.

...

No actions are required in the CloudWatch Metrics service to retrieve this kind of information.

...

Logs can be collected from different services. Depending on the service type, you may need to apply some settings on AWS:

CloudWatch Logs

No actions are required in this service to retrieve this kind of information.

VPC Flow Logs

Before enabling the generation of these logs, you must create one bucket in the S3 service and one standard queue in the SQS service. To create these required structures manually, follow these steps (click to expand):

Expand
titleSQS queue creation
  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the Standard queue type and set the Name field value you prefer.

  3. In the Configuration section, set the Message retention period field value to 5 Days and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*". Leave the rest of the JSON as default (see the sketch after this list).

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.
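
For reference, after the replacement described in step 4, the resulting access policy could look similar to the following sketch (the Sid value is the one usually generated by the console, and the region, account ID, and queue name are placeholders):

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "__owner_statement",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "SQS:*",
            "Resource": "arn:aws:sqs:<region>:<account_id>:<queue_name>"
        }
    ]
}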

Expand
title S3 bucket creation/configuration
  1. Go to S3 and click Create bucket.

  2. Set the preferred value in the Bucket name field.

  3. Choose the required Region value and click Next.

  4. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector. Leave the rest of the fields with their default values and click Next.

  5. Click Create bucket.

  6. Mark the checkbox next to the previously created S3 bucket.

  7. In the popup box, click Copy Bucket ARN and save the content. You will need it later.

  8. In the S3 bucket list, click the previously created bucket name link.

  9. Click the Properties tab, then click the Events box.

  10. Click Add notification.

  11. Set the preferred value in the Name field.

  12. Select the All object create events checkbox.

  13. In the Send to field, select SQS Queue.

  14. Select the previously created SQS queue in the SQS field.

Expand
title VPC service

Once the required AWS structures are created, go to the VPC service and follow these steps:

  1. Select any available VPC (or create a new one).

  2. Go to the Flow Logs tab and click Create flow log.

  3. Choose the preferred Filter value and the required Maximum aggregation interval value.

  4. In the Destination field, select Send to an S3 bucket.

  5. In the S3 bucket ARN field set the ARN of the previously created S3 bucket.

  6. Make sure that the Format field has the value AWS default format.

  7. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  8. Finally, click Create.

CloudFront Logs

Before enabling the generation of these logs, you must create one bucket in the S3 service and one standard queue in the SQS service. To create these required structures manually, follow these steps (click to expand):

Expand
title SQS queue creation
  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the Standard queue type and set the Name field value you prefer.

  3. In the Configuration section, set the Message retention period field value to 5 Days and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*". Leave the rest of the JSON as default.

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.

Expand
titleS3 bucket creation/configuration
  1. Go to S3 and click Create bucket.

  2. Set the preferred value in the Bucket name field.

  3. Choose the required Region value and click Next.

  4. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector. Leave the rest of the fields with their default values and click Next.

  5. Click Create bucket.

  6. Mark the checkbox next to the previously created S3 bucket.

  7. In the popup box, click Copy Bucket ARN and save the content. You will need it later.

  8. In the S3 bucket list, click the previously created bucket name link.

  9. Click the Properties tab, then click the Events box.

  10. Click Add notification.

  11. Set the preferred value in the Name field.

  12. Select the All object create events checkbox.

  13. In the Send to field, select SQS Queue.

  14. Select the previously created SQS queue in the SQS field.

Expand
titleCloudFront service

Once the required AWS structures are created, go to the CloudFront service and follow these steps:

  1. Click the ID of the target Distribution item and access the Distribution Settings options. Then, click Edit.

  2. In the Logging field, select On.

  3. In the Bucket for Logs field, enter the ARN of the previously created S3 bucket.

  4. Finally, click the Yes, Edit button.

Collector service details

The following tables show details about the predefined services available to be used in the collector configuration.

...

Devo collector service name

...

Complete service name

...

CloudWatch filter used

...

CloudTrail source filter used

...

Metrics namespace used

...

Description

...

Service events
(type: events)

...

Audit events
(type: audits)

...

Metrics
(type: metrics)

...

Logs
(type: logs)

...

service-events-all

...

All service events

...

{"account":["<account_id>"]}

...

N/A

...

N/A

...

This service will collect all service events information available in the CloudWatch service, no matter the source defined in the event.

...

...

X

...

X

...

X

...

audit-events-all

...

All audit events

...

N/A

...

all_sources

...

N/A

...

This service will collect all audit events information available in the CloudTrail service, no matter the source defined in the event.

...

X

...

...

X

...

X

...

metrics-all

...

All metrics

...

N/A

...

N/A

...

all_metrics_namespaces

...

This service will collect all metric information from CloudWatch service. Metrics from all the available metric namespaces will be retrieved.

...

X

...

X

...

...

X

...

<cwl_custom>

...

CloudWatch Logs

...

N/A

...

N/A

...

N/A

...

This service will collect the different “Log Streams” that are part of a “Log Group” from the CloudWatch Logs service. Since it is common to have more than one “Log Group” defined, this will require creating one <cwl_custom> entry per “Log Group”.

...

X

...

X

...

X

...

...

non-cloudwatch-logs

...

Non-CloudWatch Logs

...

N/A

...

N/A

...

N/A

...

This service will collect data from the following services: VPC Flow Logs and CloudFront Logs.

...

X

...

X

...

X

...

...

sqs-cloudwatch-consumer

...

Service events generated by CloudWatch Events service

...

Check more info here.

...

N/A

...

N/A

...

This service will collect all Security Hub findings that have been sent to CloudWatch, no matter the source defined in the finding.

...

...

X

...

X

...

X

Info

In the service-events-all collector service, the <account_id> string is automatically replaced with the real value.

Note

The values entered in <cwl_custom> must be unique values.

Collector configuration details

Depending on the data type chosen for collection, the following service definitions can be added to the configuration inside the services section. These are the common properties that all services share (a combined example is shown after the list):

  • regions (mandatory) - It must be a list with valid target region names to be used when collecting data. One processing thread will be created per region. See more info about the available regions here.

  • request_period_in_seconds (optional) - The period in seconds to be used between pulling executions (default value: 60)

  • pull_retries (optional) - Number of retries that will be executed when a pulling error occurs (default value: 3)

  • tag (optional) - Used for sending the data to a table different from the default one (in the configuration examples, they appear as commented lines).
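
As a combined reference, a service entry using these common properties could look like the following sketch (the service name, tag, and regions are only examples):

Code Block
<service_name>:
  #tag: my.app.aws_example
  request_period_in_seconds: 300
  pull_retries: 3
  regions:
    - us-east-1
    - eu-west-1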

Global predefined services

These service definitions can be used for collecting in a global way the different data types available in AWS.

Service events

This is the configuration to be used when any service event needs to be collected from AWS, except Security Hub.

Code Block
service-events-all:
  #tag: my.app.aws_service_events
  cloudwatch_sqs_queue_name: <queue_name>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
Info

The default target table is cloud.aws.cloudwatch.events

This is the configuration to be used when Security Hub events need to be collected.

Code Block
sqs-cloudwatch-consumer:
  #tag: <str>
  cloudwatch_sqs_queue_name: <queue_name>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
Note

The SQS queue name is required

Info

The default target table is cloud.aws.securityhub.findings

All audit events

There are two ways to get audit events. In case just a few events are going to be generated in the platform, using the API may be enough. However, when mid or high volumes are expected, saving those audit events in an S3 bucket would be the best choice. In this case, an SQS queue should be created to consume those events from the collector.

This is how the config file should be defined to retrieve audit events via API:

Code Block
audit-events-all:
  #tag: <str with {placeholders}>
  #types:
    #- audits_api <str>
  #auto_event_type: <bool>
  #request_period_in_seconds: <int>
  #start_time: <datetime_iso8601_format>
  #drop_event_names: ["event1", "event2"] <list of str>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

...

Field

...

Type

...

Mandatory

...

Description

...

tag

...

string

...

no

...

Tag or tag format to be used, e.g.:

  • my.app.aws_audit_events

  • cloud.aws.cloudtrail.{event_type}.{account_id}.{region_id}.{collector_version}

...

types

...

list of strings (in yaml format)

...

no

...

Enable/Disable modules only when several modules per service are defined. To get audit events from API, this field should be set to audits_api.

...

request_period_in_seconds

...

integer

...

no

...

Period in seconds used between each data pull; this value overrides the default value (60 seconds).

...

start_time

...

datetime

...

no

...

Datetime from which to start collecting data. It must match the ISO-8601 format (for example, 2022-01-01T00:00:00Z).

...

auto_event_type

...

boolean

...

no

...

Used to enable the auto categorization of message tagging.

...

drop_event_names

...

list of strings

...

no

...

If the value in the eventName field matches any of the values in this field, the event will be discarded.

For example, if this parameter is populated with the values ["Decrypt", "AssumeRole"], and the value of the eventName field is Decrypt or AssumeRole, the event will be discarded.

...

regions

...

list of strings (in yaml format)

...

yes, if defined in the “Collector definitions”.

...

Property name (regions) should be aligned with the one defined in the submodules_property property from the “Collector definitions”

On the other hand, if S3 + SQS is the chosen option to get the audit events, the config file should match the following format:

Code Block
audit-events-all:
  #tag: <str with {placeholders}>
  #types:
    #- audits_s3 <str>
  #request_period_in_seconds: <int>
  #start_time: <datetime_iso8601_format>
  #auto_event_type: <bool>
  audit_sqs_queue_name: <str>
  #s3_file_type_filter: <str (RegEx)>
  #use_region_and_account_id_from_event: <bool>
  regions:
    - region_a <str>
    - region_b <str>
    - region_c <str>
Info

The default target table is cloud.aws.cloudtrail.events

...

Field

...

Type

...

Mandatory

...

Description

...

tag

...

string

...

no

...

Tag or tag format to be used, e.g.:

  • my.app.aws_audit_events

  • cloud.aws.cloudtrail.{event_type}.{account_id}.{region_id}.{collector_version}

...

types

...

list of strings (in yaml format)

...

no

...

Enable/Disable modules only when several modules per service are defined

...

request_period_in_seconds

...

integer

...

no

...

Period in seconds used between each data pull; this value overrides the default value (60 seconds).

...

start_time

...

datetime

...

no

...

Datetime from which to start collecting data. It must match the ISO-8601 format (for example, 2022-01-01T00:00:00Z).

...

auto_event_type

...

boolean

...

no

...

Used to enable the auto categorization of message tagging.

...

audit_sqs_queue_name

...

string

...

yes

...

Name of the SQS queue to read from.

...

s3_file_type_filter

...

string

...

no

...

RegEx to retrieve proper file type from S3

...

use_region_and_account_id_from_event

...

bool

...

no

...

If true, the region and account_id are taken from the event; if false, they are taken from the account used to do the data pulling. Default: true

...

regions

...

list of strings (in yaml format)

...

yes, if defined in the “Collector definitions”.

...

Property name (regions) should be aligned with the one defined in the submodules_property property from the “Collector definitions”

All metrics

Code Block
metrics-all:
  #tag: my.app.aws_metrics
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
Info

The default target table is cloud.aws.cloudwatch.metrics

CloudWatch Logs

One entry must be defined per Log Group to be processed. In this example, two different entries have been created (cwl_1, cwl_2) for processing the Log Groups called /aws/log_stream_a and /aws/log_stream_b

Code Block
cwl_1:
  #tag: my.app.aws_cwl
  types:
    - logs
  log_group: /aws/log_stream_a
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
cwl_2:
  #tag: my.app.aws_cwl
  types:
    - logs
  log_group: /aws/log_stream_b
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
Note

As shown in the examples, the types list must contain the logs value.

Info

The default target table is cloud.aws.cloudwatch.logs

Non-CloudWatch Logs

Code Block
non-cloudwatch-logs:
  #tag: my.app.aws_cwl
  #vpcflowlogs_sqs_queue_name: <custom_queue_a>
  #cloudfront_sqs_queue_name: <custom_queue_b>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
Info

The default target tables are cloud.aws.vpc.flowlogs and cloud.aws.cloudfront.

Note

The default existing expected SQS queue names for this service are devo-ncwl-vpcfl-<short_unique_identifier> and devo-ncwl-cfl-<short_unique_identifier>

Info

The properties vpcflowlogs_sqs_queue_name and cloudfrontlogs_sqs_queue_name can be used to set custom queue names instead of the default expected ones.

Run the collector

Once the data source is configured, you can send us the required information and we will host and manage the collector for you (Cloud collector), or you can host the collector in your own machine using a Docker image (On-premise collector).

...

Rw tab
titleOn-premise collector

This data collector can be run on any machine that has the Docker service available because it is executed as a Docker container. The following sections explain how to prepare all the required setup to have the data collector running.

Structure

...

Code Block
<any_directory>
└── devo-collectors/
    └── aws/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        └── config/ 
            └── config-aws.yaml

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <any directory>/devo-collectors/aws/certs. Learn more about security credentials in Devo here.

...

Editing the config-aws.yaml file

In the config-aws.yaml file, replace the <short_unique_identifier>, <access_key_value>, <access_secret_value>, <region_*> and <log_group_value_*> placeholders with the values that you obtained in the previous steps. For the <short_unique_identifier>, <region_*> and <log_group_value_*> placeholders, enter values of your choice.

Code Block
globals:
  debug: false
  id: not_used
  name: aws
  persistence:
    type: filesystem
    config:
      directory_name: state
outputs:
  devo_1:
    type: devo_platform
    config:
      address: collector-eu.devo.io
      port: 443
      type: SSL
      chain: chain.crt
      cert: <your_domain>.crt
      key: <your_domain>.key
inputs:
  aws:
    id: <short_unique_identifier>
    enabled: true
    requests_per_second: 5
    autoconfig:
      enabled: true
      refresh_interval_in_seconds: 600
    credentials:
      access_key: <access_key_value>
      access_secret: <access_secret_value>
    services:
      service-events-all:
        #tag: my.app.aws_service_events
        #sqs_queue_name: <custom_queue_name>.fifo
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      audit-events-all:
        #tag: my.app.aws_audit_events
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      metrics-all:
        #tag: my.app.aws_metrics
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      non-cloudwatch-logs:
        #tag: my.app.aws_non_cwl
        #cloudfrontlogs_sqs_queue_name: <custom_queue_name>
        #vpcflowlogs_sqs_queue_name: <custom_queue_name>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      cwl_1:
        #tag: my.app.aws_cwl
        types:
          - logs
        log_group: <log_group_a>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      cwl_2:
        #tag: my.app.aws_cwl
        types:
          - logs
        log_group: <log_group_b>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
Info

By default, there are several SQS queue name patterns that the collector will try to use for retrieving information from AWS. Depending on the info type, the following patterns exist:

  • Service events: devo-<service_name>-<short_unique_identifier>.fifo. The property sqs_queue_name can be used to choose a custom name.

  • VPC Flow Logs: devo-ncwl-vpcfl-<short_unique_identifier>. The property vpcflowlogs_sqs_queue_name can be used to choose a custom name.

  • CloudFront: devo-ncwl-cfl-<short_unique_identifier>. The property cloudfrontlogs_sqs_queue_name can be used to choose a custom name.

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

...

Collector Docker image

...

SHA-256 hash

...

collector-aws-docker-image-1.4.1.tgz

...

21e735b6338537632396171bd09829508949947fb672f14543ce97a475bc72b3

Use the following command to add the Docker image to the system:

Code Block
gunzip -c collector-aws-docker-image-<version>.tgz | docker load
Info

Once the Docker image is imported, the real name of the Docker image (including version info) will be displayed. Replace "<version>" with the proper value.
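
To confirm the name and version of the imported image, you can list the local Docker images:

Code Block
docker images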

The Docker image can be deployed on the following services:

  • Docker

  • Docker Compose

Docker

Execute the following command on the root directory <any_directory>/devo-collectors/aws/

Code Block
docker run \
--name collector-aws \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config-aws.yaml \
--rm -it docker.devo.internal/collector/aws:<version>
Note

Replace <version> with a proper value.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/aws/ directory.

Code Block
version: '3'
services:
  collector-aws:
    build:
      context: .
      dockerfile: Dockerfile
    image: docker.devo.internal/collector/aws:${IMAGE_VERSION:-latest}
    container_name: collector-aws
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config-aws.yaml}
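
To start the container using this Docker Compose file, a command like the following can be run from the <any_directory>/devo-collectors/aws/ directory (replace <version> with the version of the downloaded image):

Code Block
IMAGE_VERSION=<version> docker-compose up -d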
Rw tab
titleCloud collector

...

Table of Contents
maxLevel2
typeflat

Configuration requirements

To run this collector, there are some configurations detailed below that you need to consider:

Configuration

Details

Credentials

There are several available options to define credentials.

Service events

All the service events that are generated on AWS are managed by Cloudwatch. However, Devo’s AWS collector offers two different services that collect Cloudwatch events:

  • sqs-cloudwatch-consumer - This service is used to collect Security Hub events.

  • service-events-all - This service is used to collect events from the rest of the AWS services.

Audit events

For the S3+SQS approach (setting types as audits_s3) some previous configuration is required.

Logs

Logs can be collected from different services. Depending on the type, some previous setups must be applied on AWS.

Info

More information

Refer to the Vendor setup section to know more about these configurations.

Overview

Amazon Web Services (AWS) provides on-demand cloud computing platforms and APIs to individual companies. Each available AWS service generates information related to different aspects of its functionality. The available data types include service events, audit events, metrics, and logs.

You can use the AWS collector to retrieve data from the AWS APIs and send it to your Devo domain. Once the gathered information arrives at Devo, it will be processed and included in different tables in the associated Devo domain so users can analyze it.

Devo collector features

Feature

Details

Allow parallel downloading (multipod)

  • Not allowed

Running environments

  • Collector server

  • On-premise

Populated Devo events

  • Table

Flattening preprocessing

  • No

Data sources

Data Source

Description

API Endpoint

Collector service name

Devo Table

Available from release

Service events

The different available services in AWS usually generate information related to their internal behaviors, such as "a virtual machine has been started", "a new file has been created in an S3 bucket" or "an AWS Lambda function has been invoked". This kind of event can be triggered without human interaction.

The service events are managed by the CloudWatch Events service (CWE). AWS has recently created a new service called Amazon EventBridge that will replace the CWE service.

The findings detected by AWS Security Hub are also managed by CloudWatch Events (CWE).

ReceiveMessage

ReceiveMessage - Amazon Simple Queue Service

Generic events:

service-events-all

Security Hub events:

sqs-cloudwatch-consumer

Generic events:

  • If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudwatch.events

  • If auto_event_type parameter in config file is set to true: cloud.aws.cloudwatch.{event_type}

Security Hub events:

  • cloud.aws.securityhub.findings

-

Audit events

This kind of event is more specific because they are triggered by human interaction, regardless of the method used: API, web interaction, or even the CLI console.

The audit events are managed by the CloudTrail service.

There are two ways to read Audit events:

  • API: using CloudTrail API. This way is slower, but it can retrieve data back in time.

  • S3+SQS: forwarding CloudTrail data to an S3 bucket and reading it from there through an SQS queue. This way is much faster, but it can only retrieve elements generated after the creation of the S3+SQS pipeline.

Via API:

LookupEvents

LookupEvents - AWS CloudTrail

Via S3+SQS:

ReceiveMessage

ReceiveMessage - Amazon Simple Queue Service

audit-events-all

  • If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudtrail.events

  • If auto_event_type parameter in config file is set to true: cloud.aws.cloudtrail.{event_type}

-

Metrics

According to the standard definition, this kind of information is usually generated at the moment it is requested, because it is usually a query about the status of a service (everything inside AWS is considered a service).

AWS works slightly differently: it generates metrics information every N time slots, such as 1 min, 5 min, 30 min, 1 h, etc., even if no one makes a request (it is also possible to have information every X seconds, but this would incur extra costs).

The metrics are managed by the CloudWatch Metrics service (CWM).

ListMetrics

ListMetrics - Amazon CloudWatch

After listing the metrics, GetMetricData and GetMetricStatistics are also called.

GetMetricData - Amazon CloudWatch

GetMetricStatistics - Amazon CloudWatch

 

metrics-all

cloud.aws.cloudwatch.metrics

-

Logs

Logs can be defined as information with a non-fixed structure that is sent to one of the available logging services; these services are CloudWatch Logs and S3.

There are some very customizable services, such as AWS Lambda, or even any developed application deployed inside an AWS virtual machine (EC2), that can generate custom log information. This kind of information is managed by the CloudWatch Logs service (CWL) and also by the S3 service.

There are also some other services that can generate logs with a fixed structure, such as VPC Flow Logs or CloudFront Logs. These kinds of services require one special way of collecting their data.

DescribeLogStreams

DescribeLogStreams - Amazon CloudWatch Logs

Logs can be:

  • Managed by Cloudwatch: This is a custom service that is activated using service custom_service and including the type logs into the types parameter in the config file.

  • Not managed by Cloudwatch: Use non-cloudwatch-logs service and include the required type (flowlogs for VPC Flow Logs and/or cloudfrontlogs for CloudFront Logs) into the types parameter in the config file.

 

  • Managed by Cloudwatch: cloud.aws.cloudwatch.logs

  • Not managed by Cloudwatch:

    • VPC Flow Logs:

      • If auto_event_type parameter in config file is not set or set to false: cloud.aws.vpc.unknown

      • If auto_event_type parameter in config file is set to true: cloud.aws.vpc.{event_type}

    • CloudFront Logs:

      • If auto_event_type parameter in config file is not set or set to false: cloud.aws.cloudfront.unknown

      • If auto_event_type parameter in config file is set to true: cloud.aws.cloudfront.{event_type}

-

Vendor setup

There are some minimal requirements to set up this collector.

  1. AWS console access: Credentials are required to access the AWS console.

  2. Owner or Administrator permissions within the AWS console, or full access to configure AWS services.

Some manual actions are necessary in order to get all the required information or services and allow Devo to gather information from AWS. The following sections describe how to get the required AWS credentials and how to proceed with the different required setups depending on the gathered information type.

Credentials

It is recommended to have available, or create, the following IAM policies before creating the IAM user that will be used for the AWS collector.

Expand
titlePolicy details

Some collector services require the creation of some IAM policies before creating the IAM user that will be used for the AWS collector. The following table contains the details about the policies that could be used by the AWS collector:

Source type

AWS Data Bus

Recommended policy name

Variant

 Details

Service events

CloudWatch Events

devo-cloudwatch-events

All resources

Tip

Creating a new policy is not required because no permissions are needed.

Audit events

CloudTrail API

devo-cloudtrail-api

All resources

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudtrail:LookupEvents",
            "Resource": "*"
        }
    ]
}

 

 

Specific resource

Note

There is no way to limit the accessed resources.

CloudTrail S3+SQS

devo-cloudtrail-s3

All resources

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

 

 

 

Specific S3 bucket

Info

Note that the value of the property called Resource should be replaced with the proper value.

Note

It is very important to keep the /* string at the end of each bucket name.

 

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::devo-cloudtrail-storage-bucket1/*",
                "arn:aws:s3:::devo-cloudtrail-storage-bucket2/*"
            ]
        }
    ]
}

Metrics

CloudWatch Metrics

devo-cloudwatch-metrics

All resources

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics"
            ],
            "Resource": "*"
        }
    ]
}

Specific resource

Note

There is no way to limit the accessed resources.

Logs

CloudWatch Logs

devo-cloudwatch-logs

All log groups

Info

Note that the value of the Resource property should be adapted with the proper account ID value.

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": "arn:aws:logs:*:936082584952:log-group:*"
        }
    ]
}

Specific log groups

Info

Note that the values inside the Resource property are only examples and should be replaced with the proper values.

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-1:*",
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-2:*"
            ]
        }
    ]
}

Logs to S3 + SQS

devo-vpcflow-logs

All resources

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

Specific resource

Code Block
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::vpc-flowlogs-test1/*"
        }
    ]
}
Info

All the previous policies are defined to be AWS region agnostic, which means that they will be valid for any AWS region.

Expand
titleUsing a user account and local policies

Depending on which source types are collected, one or more of the policies described above will be used. Once the required policies are created, each one must be associated with an IAM user. To create it, visit the AWS Console and log in with a user account with enough permissions to create and access AWS structures:

  1. Go to IAM → Users

  2. Click Add users button.

  3. Enter the required value in the field User name.

  4. Enable the checkbox Access key - Programmatic access.

  5. Click on Next: Tags button.

  6. Click on Next: Review button.

  7. Click on Create user button.

  8. The Access Key ID and Secret Access Key will be shown. Click the Download .csv button and save it.

Expand
titleAssuming a role (self-account)

It is best practice to assume roles that are granted just the required privileges to perform an action. If the customer does not want to use their own AWS user to perform these actions required by the collector - because it has far more privileges than required - they can use this option. Note that this option requires the use of AWS account credentials. To avoid sharing those credentials, check the Cross Account section below.

Then, the customer must attach the required policies in AWS to the role that is going to be assumed.

  1. Go to IAM → Roles.

  2. Click on Create role button.

  3. In the Trusted entity type, select AWS account and then select This account (123456789012).

  4. Add the required policies.

  5. Give a name to the role.

  6. Click on Create role.

You should also add authentication credentials to the configuration. Add the following fields to the configuration:

  • access_key: This is the Access Key ID provided by AWS during the user creation process.

  • access_secret: This is the Secret Access Key provided by AWS during the user creation process.

  • base_assume_role: This is the ARN of the role that is going to be assumed by the user authenticated with the parameters above, access_key and access_secret. This role has to be properly granted to allow the actions that the collector is going to perform.

These fields need to be in the credentials and are required to use this authentication method:

Code Block
...,
"credentials":{
  "access_key": "<CUSTOMER_AWS_ACCOUNT_ACCESS_KEY>",
  "access_secret": "<CUSTOMER_AWS_ACCOUNT_SECRET_ACCESS_KEY>",
  "base_assume_role": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:role/<ROLE_TO_BE_ASSUMED>"
}
...,
Expand
titleAssuming a role (cross-account)

In case you don't want to share your credentials with Devo, you should add some parameters to the configuration file. In the credentials section, instead of sharing access_key and access_secret, follow these steps to allow this authentication:

  1. Prepare the environment to allow Devo’s cloud collector server to assume roles cross-account.

  2. Add ARNs for each role into the configuration:

    • base_assume_role: This is the ARN of the role that is going to be assumed by the profile bound to the machine/instance where the collector is running. This role already exists in Devo's AWS account; to deploy the collector on Devo's Collector Server, its value must be arn:aws:iam::837131528613:role/devo-xaccount-cs-role.

    • target_assume_role: This is the ARN of the role in the customer's AWS account. This role allows the collector to have access to the resources specified in that role. To keep your data secure, please use policies that grant just the necessary permissions.

    • assume_role_external_id: This is an optional parameter to add more security to this Cross Account operation. This value should be a string added to the request to assume the customer's role.

Note

Credentials

This authentication method does not share credentials. These fields need to be in the credentials section and are all required, except assume_role_external_id, which is optional:

Code Block
...,
"credentials":{
  "base_assume_role": "arn:aws:iam::<BASE_SYSTEM_AWS_ACCOUNT_ID>:role/<BASE_SYSTEM_ROLE>",
  "target_assume_role": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:role/<CUSTOMER_ROLE_TO_BE_ASSUMED>",
  "assume_role_external_id": "<OPTIONAL__ANY_STRING_YOU_WANT>"
}
...,

Service Events

Cloudwatch manages all the service events that have been generated on AWS. However, Devo’s AWS Collector offers two different services that collect Cloudwatch Events:

  1. sqs-cloudwatch-consumer: This service is used to collect Security Hub events.

  2. service-events-all: This service is used to collect events from the rest of the services on AWS

Info

Service events

Some previous configurations are required if you want to use any of these services. The AWS services generate service events per region, so the following instructions should be applied in each region where collecting information is required. There are some structures that you need to create for collecting these service events: a FIFO queue in the SQS service and a Rule+Target in the CloudWatch service.

If you want to create them manually, click on each one to follow the steps.

Expand
titleSQS FIFO queue creation
  1. Go to Simple Queue Service and click on Create queue.

  2. In the Details section, choose the FIFO queue type and set the Name field value you prefer. It must end with the .fifo suffix.

  3. In the Configuration section, set the Message retention period field value to 5 Days. Be sure that the Content-based deduplication checkbox is marked.

  4. In the Access policy section, choose the method Basic and choose Only the queue owner for receiving and sending permissions.

  5. Optional step: create one tag with Key usedBy and Value devo-collector.

  6. Click on Create queue.

Expand
titleEventBridge Rule + Target creation
  1. Go to EventBridge, expand Events in the left-side menu and click on Rules.

  2. In the Define rule detail section, fill in the required data and select the Rule type called Rule with an event pattern.

  3. In the Build event pattern section, select All events.*

  4. In the Select Target section, select AWS target as the target type and fill in the SQS queue information. In the Message group ID field, enter devo-collector.

  5. Optional step: configure the Tags section.

  6. In the Review and create section, check the different sections and, once everything is correct, click on Create rule.

Info

(*) Note for Security Hub

To retrieve Security Hub Findings, in the Build event pattern section, select AWS events or EventBridge partner events in Event source. Then, go to the Sample events - optional part and select AWS events in Sample event type. In Sample events, select Security Hub Findings - Custom Action.

Steps to enable Audit Events

No actions are required in the CloudTrail service to retrieve this kind of information when the API approach is used (setting types as audits_api).

For the S3+SQS approach (setting types as audits_s3), some previous configuration is required. Find a complete description of how to create an S3+SQS pipeline here.

Steps to enable Metrics

No actions are required in CloudWatch Metrics service for retrieving this kind of information.

Steps to enable Logs

Logs can be collected from different services. Depending on the type, some previous setups must be applied on AWS:

Expand
titleCloudWatch Logs

No actions are required in this service for retrieving this kind of information.

Expand
titleVPC Flow Logs

Before enabling the generation of these logs, some structures must be created: one bucket in the S3 service and one standard queue in the SQS service.

Follow the steps to create those structures manually:

Create SQS Standard queue

  1. Go to Simple Queue Service and click on Create queue.

  2. In the Details section, choose the Standard queue type.

  3. In the Configuration section, set the Message retention period field value to 5 days and leave the rest of the values in the Configuration section with the default ones.

  4. In the Access Policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*".

  5. Optional. In the Tag section create one tag with key “usedBy” and value “devo-collector”.

  6. Click on Create queue button.

Create or configure S3 bucket

  1. Go to S3 and click on Create bucket button.

  2. Set the preferred value in the Bucket name field.

  3. Choose any Region value.

  4. Click on the Next button.

  5. Optional. Create one tag with Key “usedBy” and Value “devo-collector”.

  6. Leave all values with the default ones and click on the Next button.

  7. Click on the Create bucket button.

  8. Mark the checkbox next to the previously created S3 bucket.

  9. In the popup box, click on the Copy Bucket ARN button and save the content to be used in the next steps.

  10. In the S3 bucket list, click on the previously created bucket name link.

  11. Click on the Properties tab.

  12. Click on the Events box.

  13. Click on the Add notification link.

  14. Set the preferred value in the Name field.

  15. Mark the All object create events checkbox.

  16. In the Send to field, select SQS Queue as the value.

  17. Select the previously created SQS queue in the SQS field.

Create Flow Log

  1. Go to VPC service.

  2. Select any available VPC (or create a new one).

  3. Choose Flow Logs tab.

  4. Click on Create flow log button.

  5. Choose the preferred Filter value.

  6. Choose the preferred Maximum aggregation interval value.

  7. In the Destination field, select Send to an S3 bucket.

  8. In the S3 bucket ARN field, set the ARN of the previously created S3 bucket (saved in a previous step).

  9. Be sure that the Format field is set to the value AWS default format.

  10. Optional. Create one tag with Key "usedBy" and Value "devo-collector"

  11. Click on Create button.
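After the flow log is created, VPC Flow Logs will be written to the bucket and the bucket notifications will feed the SQS queue, so the collector can consume them through the non-cloudwatch-logs service. A minimal sketch, reusing the placeholders from the config.yaml shown later in this page (the vpcflowlogs type value is an assumption; check the collector definitions for the exact value):

Code Block
services:
  non-cloudwatch-logs:
    types:
      - vpcflowlogs              # assumed type name for VPC Flow Logs
    vpcflowlogs_sqs_queue_name: <sqs_queue_name_value>
    regions:
      - us-east-1                # example region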

Expand
titleCloudfront Logs

Before enabling the generation of these logs, some structures must be created: one bucket in the S3 service and one queue in the SQS service.

For the manual creation of these required structures, please follow the next steps:

Create SQS Standard queue

  1. Go to Simple Queue Service and click on Create queue button.

  2. In the Details section, choose the Standard queue type and set the preferred value in the Name field.

  3. In the Configuration section, set the Message retention period field value to 5 days and leave the rest of the values in the Configuration section with their default values.

  4. In the Access policy section, choose the Advanced method and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*" (leave the rest of the JSON as it comes).

  5. Optional. In the Tags section, create one tag with key "usedBy" and value "devo-collector".

  6. Click on Create queue button.

Create or configure S3 bucket

  1. Go to S3 and click on Create bucket button.

  2. Set the preferred value in the Bucket name field.

  3. Choose any Region value.

  4. Click on the Next button.

  5. Optional. Create one tag with key “usedBy” and value “devo-collector”.

  6. Leave all values with the default ones and click on the Next button.

  7. Click on Create bucket button.

  8. Mark the checkbox next to the previously created S3 bucket.

  9. In the popup box, click on the Copy Bucket ARN button and save the content to be used in the next steps.

  10. In the S3 bucket list, click on the previously created bucket name link.

  11. Click on the Properties tab.

  12. Click on the Events box.

  13. Click on the Add notification link.

  14. Set the preferred value in the Name field.

  15. Mark the All object create events checkbox.

  16. In the Send to field, select SQS Queue as the value.

  17. Select the previously created SQS queue in the SQS field.

Allow Logging in Cloudfront

  1. Go to Cloudfront service.

  2. Click on the ID field link of the target Distribution item (to access the Distribution Settings options).

  3. Click on the Edit button.

  4. In Logging, choose the value On.

  5. In the Bucket for Logs field, set the ARN of the previously created S3 bucket (saved in a previous step).

  6. Click on Yes and then click on the Edit button.
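Once logging is enabled, CloudFront starts writing log files to the bucket and the bucket notifications feed the SQS queue, so the collector can consume them through the non-cloudwatch-logs service. A minimal sketch, reusing the placeholders from the config.yaml shown later in this page (the cloudfrontlogs type value is an assumption; check the collector definitions for the exact value):

Code Block
services:
  non-cloudwatch-logs:
    types:
      - cloudfrontlogs           # assumed type name for CloudFront logs
    cloudfrontlogs_sqs_queue_name: <sqs_queue_name_value>
    regions:
      - us-east-1                # example region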

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

Info

This minimum configuration refers exclusively to those specific parameters of this integration. There are more required parameters related to the generic behavior of the collector. Check setting sections for details.

Setting

Details

access_key

This is the account identifier for AWS. More info can be found in the section Using a user account and local policies.

access_secret

This is the secret (kind of a password) for AWS. More info can be found in the section Using a user account and local policies.

base_assume_role

This allows assuming a role with limited privileges to access AWS services. More info can be found in the sections Assuming a role (self-account) and/or Assuming a role (cross-account).

target_assume_role

This allows assuming a role on another AWS account with limited privileges to access AWS services. More info can be found in the section Assuming a role (cross-account).

assume_role_external_id

This is an optional field that provides additional security to the assuming role operation on cross-accounts. More info can be found in the section Assuming a role (cross-account).

Info

See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.

Accepted authentication methods

Depending on how you obtained your credentials, you will have to either fill in or delete the following properties in the credentials configuration block.

Authentication Method

access_key

access_secret

base_assume_role

target_assume_role

assume_role_external_id

Access Key / Access Secret

Status
colourGreen
titleREQUIRED

Status
colourGreen
titleREQUIRED

 

 

 

Assume role (self-account)

Status
colourGreen
titleREQUIRED

Status
colourGreen
titleREQUIRED

Status
colourGreen
titleREQUIRED

 

 

Assume role (cross-account)

 

 

Status
colourGreen
titleREQUIRED

Status
colourGreen
titleREQUIRED

Status
colourYellow
titleOPTIONAL
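As a reference, this is how the credentials block of the config.yaml (shown in the Run the collector section below) would look for two of the methods; properties not used by a method should be removed from the block:

Code Block
# Access Key / Access Secret
credentials:
  access_key: <access_key_value>
  access_secret: <access_secret_value>

# Assume role (cross-account)
credentials:
  base_assume_role: <base_assume_role_value>
  target_assume_role: <target_assume_role_value>
  assume_role_external_id: <assume_role_external_id_value>   # optional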

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Rw ui tabs macro
Rw tab
titleOn-premise collector

This data collector can be run on any machine that has the Docker service available because it should be executed as a Docker container. The following sections explain how to prepare all the required setup to have the data collector running.

Structure

The following directory structure should be created for being used when running the collector:

Code Block
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/ 
            └── config.yaml 
Note

Replace <product_name> with the proper value.

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.

Note

Replace <product_name> with the proper value.

Editing the config.yaml file

Code Block
globals:
  debug: false
  id: <collector_id>
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  aws:
    id: <short_unique_id>
    enabled: true
    credentials:
      access_key: <access_key_value>
      access_secret: <access_secret_value>
      base_assume_role: <base_assume_role_value>
      target_assume_role: <target_assume_role_value>
      assume_role_external_id: <assume_role_external_id_value>
    services:
      service-events-all:
        request_period_in_seconds: <request_period_in_seconds_value>
        cloudwatch_sqs_queue_name: <sqs_queue_name_value>
        auto_event_type: <auto_event_type_value>
        regions: <list_of_regions>
      sqs-cloudwatch-consumer:
        request_period_in_seconds: <request_period_in_seconds_value>
        cloudwatch_sqs_queue_name: <sqs_queue_name_value>
        auto_event_type: <auto_event_type_value>
        regions: <list_of_regions>
      audit-events-all:
        types: <list_of_types>
        request_period_in_seconds: <request_period_in_seconds_value>
        start_time: <start_time_value>
        auto_event_type: <auto_event_type_value>
        drop_event_names: <list_of_drop_event_names>
        audit_sqs_queue_name: <sqs_queue_name_value>
        s3_file_type_filter: <s3_file_type_filter_value>
        use_region_and_account_id_from_event: <use_region_and_account_id_from_event_value>
        regions: <list_of_regions>
      metrics-all:
        regions: <list_of_regions>
      non-cloudwatch-logs:
        types: <list_of_types>
        regions: <list_of_regions>
        start_time: <start_time_value>
        vpcflowlogs_sqs_queue_name: <sqs_queue_name_value>
        cloudfrontlogs_sqs_queue_name: <sqs_queue_name_value>
      custom_service:
        types: <list_of_types>
        log_groups: <log_group_value>
        start_time: <start_time_value>
        regions: <list_of_regions>
Info

All defined service entities will be executed by the collector. If you do not want to run one of them, just remove that entity from the services object.

Replace the placeholders with your required values following the description table below:

Parameter

Data Type

Type

Value Range

Details

collector_id

int

Mandatory

Minimum length: 1
Maximum length: 5

Use this param to give a unique ID to this collector.

collector_name

str

Mandatory

Minimum length: 1
Maximum length: 10

Use this param to give a valid name to this collector.

devo_address

str

Mandatory

collector-us.devo.io
collector-eu.devo.io

Use this param to identify the Devo Cloud where the events will be sent.

chain_filename

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the chain certificate file downloaded from your Devo domain. Usually, this file's name is chain.crt.

cert_filename

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the certificate file downloaded from your Devo domain. Usually, this file's name is <your_domain>.crt.

key_filename

str

Mandatory

Minimum length: 4
Maximum length: 20

Use this param to identify the key file downloaded from your Devo domain. Usually, this file's name is <your_domain>.key.

short_unique_id

int

Mandatory

Minimum length: 1
Maximum length: 5

Use this param to give a unique ID to this input service.

Note

This parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision.

access_key_value

str

See Accepted authentication methods section above.

Minimum length: 1

The access key ID value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.

access_secret_value

str

See Accepted authentication methods section above.

Minimum length: 1

The secret access key value obtained from AWS when a user is created to access programmatically. It is used when authenticating with a user account and also to assume a self-account role.

base_assume_role_value

str

See Accepted authentication methods section above.

Minimum length: 1

The ARN of the role to be assumed in the base account. It can be used for self- or cross-account authentication methods.

target_assume_role_value

str

See Accepted authentication methods section above.

Minimum length: 1

The ARN of the role to be assumed in the customer’s account. It is used for the cross-account authentication method.

assume_role_external_id_value

str

See Accepted authentication methods section above.

Minimum length: 1

This is an optional string implemented by the customer to add an extra security layer. It can only be used for the cross-account authentication method.

request_period_in_seconds_value

int

Optional

Minimum length: 1

Period in seconds between each data pull. This value will overwrite the default value (300 seconds).

Info

This parameter should be removed if it is not used.

auto_event_type_value

bool

Optional

true/false

Used to enable the automatic categorization of message tagging.

start_time_value

datetime

Optional

1970-01-01T00:00:00.000Z

Datetime from which to start collecting data. It must match ISO-8601 format.

list_of_types

list (of strings)

Optional

Code Block
types:
  - type1
  - type2
  - type3

Enables/disables specific modules when several modules per service are defined. For example, to get audit events from the API, this field should be set to audits_api.

list_of_regions

list (of strings)

Mandatory, if defined in the collector’s definition.

Code Block
regions:
  - region1
  - region2
  - region3

The property name (regions) should be aligned with the one defined in the submodules_property property from the “Collector definitions”.

list_of_drop_event_names

list (of strings)

Optional

Code Block
drop_event_names:
  - drop1
  - drop2
  - drop3

If the value in the eventName field matches any of the values in this field, the event will be discarded.

For example, if this parameter is populated with the values ["Decrypt", "AssumeRole"] and the value of the eventName field is Decrypt or AssumeRole, the event will be discarded.

sqs_queue_name_value

str

Mandatory

Minimum length: 1

Name of the SQS queue to read from.

s3_file_type_filter_value

str

Optional

Minimum length: 1

RegEx to retrieve the proper file type from S3, in case there is more than one file type in the same SQS queue from which the service is pulling data.

This parameter can be used for those services getting data from an S3+SQS pipeline.

use_region_and_account_id_from_event_value

bool

Optional

true/false

If true, the region and account_id are taken from the event; if false, they are taken from the account used to do the data pulling.

Default: true

It can only be used in those services using an S3+SQS pipeline.

log_group_value

str

Mandatory

Minimum length: 1

The log group name must be set here as-is, including the different levels separated by slashes.

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

Collector Docker image

SHA-256 hash

collector-aws-docker-image-1.4.1.tgz

21e735b6338537632396171bd09829508949947fb672f14543ce97a475bc72b3

Use the following command to add the Docker image to the system:

Code Block
gunzip -c <image_file>-<version>.tgz | docker load
Note

Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace <image_file> and <version> with the proper values.
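For example, for the image version listed in the table above, the command would be:

Code Block
gunzip -c collector-aws-docker-image-1.4.1.tgz | docker load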

The Docker image can be deployed on the following services:

Docker

Execute the following command in the root directory <any_directory>/devo-collectors/<product_name>/

Code Block
docker run \
--name collector-<product_name> \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config.yaml \
--rm \
--interactive \
--tty \
<image_name>:<version>
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.

Code Block
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}

To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Rw tab
titleCloud collector

We use a piece of software called Collector Server to host and manage all our available collectors. If you want us to host this collector for you, get in touch with us and we will guide you through the configuration.

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

Events service

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchEventsPullerSetup(aws,aws#abc123,service-events-all#predefined,us-east-2) -> Setup for module <AwsCloudwatchEventsPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting a new pulling from "dc-aws-cloudwatch-test-1.fifo" queue at "2022-09-23T07:44:54.589769+00:00"
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Received 198 response(s), received 1973 message(s), generated 1973 message(s), detected_event_types: ["ssm", "s3", "sts", "backup", "kms", "tag", "config", "logs", "cloudtrail"], avg_time_per_source_message: 335.170 ms
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Starting a new pulling from "dc-aws-cloudwatch-test-1.fifo" queue at "2022-09-23T07:55:55.546142+00:00"
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Received 1 response(s), received 0 message(s), generated 0 message(s), detected_event_types: [], avg_time_per_source_message: 437.862 ms
INFO InputProcess::AwsCloudwatchEventsPuller(aws,abc123,service-events-all,predefined,us-east-2) -> Data collection completed. Elapsed time: 0.438 seconds. Waiting for 59.562 second(s) until the next one

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

Code Block
INFO ThreatQuotientDataPuller(threatquotient_collector,threatquotient_data_puller#111,events#predefined) -> Statistics for this pull cycle (@devo_pulling_id=1655983326.290848): Number of requests performed: 2; Number of events received: 52; Number of duplicated events filtered out: 0; Number of events generated and sent: 52 (from 52 unflattened events); Average of events per second: 92.99414315733.
Info

The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

Expand
titleRestart the persistence

This collector does not use any kind of persistent storage.

Audit events (via API)

This service reads Cloudtrail audit events via API.

There are two ways to read Cloudtrail events: via API or via S3+SQS.

  • API: It is slower, but can read past events.

  • S3+SQS: It is much faster, but can only read events since the creation of the queue.

This service makes use of the AWS API to get the data.

Expand
titleDevo categorization and destination

If auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudtrail.events.

If auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudtrail.{event_type}.

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudtrailApiPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Setup for module <AwsCloudtrailApiPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting a new pulling from "['all_sources']" source at "2022-09-23T08:56:22.366820+00:00"
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Using 15 minutes as "gap until now", start_date: "2022-09-12T12:34:56.123456+00:00", end_date: "2022-09-23T08:41:22.366820+00:00", time_slot_in_hours: "1"
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Total number of time slots to be processed: 261
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Number of processed time slots so far: 100
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Number of processed time slots so far: 200
...
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Received 1315 response(s), messages (total/dropped/other_region/duplicated/generated): 124/6149/0/0/113, tag template used: "cloud.aws.cloudtrail.{event_type}.123456789012.us-east-8.1.prod-1", avg_time_per_source_message: 708.624 ms
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Elapsed time: 931.842 seconds. Last retrieval took too much time, no wait will be applied in this loop
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Data collection completed. Elapsed time: 2.717 seconds. Waiting for 57.283 second(s) until the next one

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

Code Block
INFO InputProcess::AwsCloudtrailApiPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Received 4 response(s), messages (total/dropped/other_region/duplicated/generated): 186/8/0/1/177, tag template used: "cloud.aws.cloudtrail.{event_type}.123456789012.us-west-8.1.prod-1", avg_time_per_source_message: 678.952 ms
Info

The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

Audit events (via S3 + SQS)

This service reads Cloudtrail audit events via the S3+SQS pipeline.

There are two ways to read Cloudtrail events: via API or via S3+SQS.

  • API: It is slower, but can read past events.

  • S3+SQS: It is much faster, but can only read events since the creation of the queue.

Expand
titleDevo categorization and destination

If auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudtrail.events.

If auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudtrail.{event_type}.

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#abc123,audit-events-all#predefined,us-east-2) -> Setup for module <AwsSqsS3CloudTrailPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsSqsS3CloudTrailPuller(aws,abc123,audit-events-all,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1797, total_bytes: 3830368 (60.43562 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1797 messages (60.436958 seconds) => 29 msg/sec
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1652, total_bytes: 3555837 (60.311803 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1652 messages (60.313064 seconds) => 27 msg/sec
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1949, total_bytes: 4277470 (60.187779 seconds)
INFO OutputProcess::DevoSender(standard_senders,devo_sender_0) -> Consumed messages: 1949 messages (60.187248 seconds) => 32 msg/sec
...

After a successful collector’s execution (that is, no error logs were found), the puller will keep logging consumption statistics like the ones shown above.

Info

The @devo_pulling_id value is injected into each event to allow grouping all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

Metrics (All metrics)

This service could be considered a general AWS metric puller. It reads metrics from all the AWS services that generate them. Those metrics are also managed by Cloudwatch.

This service makes use of the AWS API to get the data.

Expand
titleDevo categorization and destination

All the events are going to be ingested into the table cloud.aws.cloudwatch.metrics.

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchMetricPullerSetup(aws,aws#123,ec2#predefined,us-east-2) -> Setup for module <AwsCloudwatchMetricPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Starting a new pulling from "['AWS/EC2', 'AWS/EC2Spot']" namespaces at "2022-09-23T14:49:36.266007+00:00"
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Time range: "2022-09-23T14:48:00Z" > "2022-09-23T14:49:00Z"
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Received 3 response(s), generated 17 message(s), tag used: "cloud.aws.cloudwatch.metrics.936082584952.us-east-2.1", avg_time_per_source_message: 393.845 ms
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Applied an offset to wait, retrieval_offset: -36.266007 seconds
INFO InputProcess::AwsCloudwatchMetricPuller(aws,123,ec2,predefined,us-east-2) -> Data collection completed. Elapsed time: 1.182 seconds. Waiting for 22.552 second(s) until the next one

After a successful collector’s execution (that is, no error logs were found), the puller will keep logging pulling statistics like the ones shown above.

Info

The @devo_pulling_id value is injected into each event to allow grouping all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

Non Cloudwatch Logs

This service reads logs from some AWS services, but those logs are not managed by Cloudwatch. These logs are stored in an S3 bucket and read through an SQS queue, so it is using an S3+SQS pipeline.

The implemented services currently are:

  • VPC Flow Logs

  • Cloudfront Logs

Expand
titleDevo categorization and destination

For VPC Flow Logs:

  • If auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.vpc.unknown

  • If auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.vpc.{event_type}.

For Cloudfront Logs:

  • If auto_event_type parameter is not set or is set to false, the events are going to be ingested into the table cloud.aws.cloudfront.unknown

  • If auto_event_type parameter is set to true, the events are going to be ingested into the table cloud.aws.cloudfront.{event_type}.

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Creating user session
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> New AWS session started.

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsSqsS3GenericPullerSetup(aws,aws#123,non-cloudwatch-logs#predefined,us-east-2) -> Setup for module <AwsSqsS3VpcFlowlogsPuller> has been successfully executed
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Received 2 response(s), messages (fromSQS/generated): 0/0, discarded files: 0, avg_time_per_source_message: 169.711 ms
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Data collection completed. Elapsed time: 0.340 seconds. Waiting for 59.660 second(s) until the next one

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

Code Block
INFO InputProcess::AwsSqsS3VpcFlowlogsPuller(aws,123,non-cloudwatch-logs,predefined,us-east-2) -> Received 2 response(s), messages (fromSQS/generated): 0/0, discarded files: 0, avg_time_per_source_message: 169.711 ms
Info

The value @devo_pulling_id is injected in each event to group all events ingested by the same pull action. You can use it to get the exact events downloaded in that Pull action in Devo’s search window.

Custom Logs

This service reads logs from some AWS services whose logs are managed by CloudWatch. CloudWatch creates log groups to store the different log sources, so a custom puller is required to read from different log groups at the same time. This service makes use of the AWS API to get the data.
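As a reference, a minimal sketch of the custom_service entry, reusing the placeholders from the config.yaml shown above. The cwl type value and the list form of log_groups are assumptions based on the parameter descriptions; check the collector definitions for the exact values. The log group name is only an example:

Code Block
services:
  custom_service:
    types:
      - cwl                                     # assumed type name for CloudWatch Logs
    log_groups:
      - /aws/events/devo-cloudwatch-test-1      # example log group name, as-is, levels separated by slashes
    start_time: <start_time_value>
    regions:
      - us-east-2                               # example region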

Expand
titleDevo categorization and destination

All the events are going to be ingested into the table cloud.aws.cloudwatch.logs

Expand
titleVerify data collection

Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Code Block
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Session cannot expire. Using user/profile authentication.
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Creating user session
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> New AWS session started.
INFO InputProcess::AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Setup for module <AwsCloudwatchLogsPuller> has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Info

Note that the PrePull action is executed only one time before the first run of the Pull action.

Code Block
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Starting data collection every 60 seconds
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Starting a new pulling from "/aws/events/devo-cloudwatch-test-1" at "2022-09-23T15:08:18.132865+00:00"
INFO InputProcess::AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) -> Optimized first retrieval approach for high number of log streams with medium size

Collector operations

This section is intended to explain how to proceed with the specific operations of this collector.

Expand
titleVerify collector operations

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, as well as validating the given configuration.

A successful run has the following output messages for the initializer module:

Code Block
2022-09-23T17:08:14.833    INFO MainProcess::MainThread -> (CollectorMultiprocessingQueue) standard_queue_multiprocessing -> max_size_in_messages: 10000, max_size_in_mb: 1024, max_wrap_size_in_items: 100
2022-09-23T17:08:15.119    INFO MainProcess::MainThread -> [OUTPUT] OutputMultiprocessingController::__init__ Configuration -> {...}
2022-09-23T17:08:15.120    INFO MainProcess::MainThread -> OutputProcess - Starting thread (executing_period=300s)
2022-09-23T17:08:15.122    INFO MainProcess::MainThread -> InputProcess - Starting thread (executing_period=300s)
2022-09-23T17:08:15.122    INFO OutputProcess::MainThread -> Process started
2022-09-23T17:08:15.124    INFO InputProcess::MainThread -> Process Started
2022-09-23T17:08:15.128    INFO InputProcess::MainThread -> InputThread(aws,123) - Starting thread (execution_period=600s)
2022-09-23T17:08:15.128    INFO InputProcess::MainThread -> ServiceThread(aws,123,cwl_1,custom) - Starting thread (execution_period=600s)
2022-09-23T17:08:15.128    INFO InputProcess::MainThread -> AwsCloudwatchLogsPullerSetup(aws,aws#123,cwl_1#custom,us-east-2) -> Starting thread
2022-09-23T17:08:15.128    INFO InputProcess::MainThread -> AwsCloudwatchLogsPuller(aws,123,cwl_1,custom,us-east-2) - Starting thread
2022-09-23T17:08:15.128    INFO OutputProcess::MainThread -> [INTERNAL LOGIC] DevoSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: collector-eu.devo.io
2022-09-23T17:08:15.132    INFO OutputProcess::MainThread -> [INTERNAL LOGIC] DevoSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: collector-eu.devo.io
2022-09-23T17:08:15.135    INFO OutputProcess::MainThread -> [INTERNAL LOGIC] DevoSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: collector-eu.devo.io
2022-09-23T17:08:15.136    INFO OutputProcess::MainThread -> DevoSender(standard_senders,devo_sender_0) -> Starting thread
2022-09-23T17:08:15.136    INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(standard_senders,devo_2) -> Starting thread (every 300 seconds)
2022-09-23T17:08:15.136    INFO OutputProcess::MainThread -> DevoSenderManager(standard_senders,manager,devo_2) -> Starting thread
2022-09-23T17:08:15.136    INFO OutputProcess::MainThread -> DevoSender(lookup_senders,devo_sender_0) -> Starting thread
2022-09-23T17:08:15.137    INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(lookup_senders,devo_2) -> Starting thread (every 300 seconds)
2022-09-23T17:08:15.137    INFO OutputProcess::MainThread -> DevoSenderManager(lookup_senders,manager,devo_2) -> Starting thread
2022-09-23T17:08:15.137    INFO OutputProcess::MainThread -> DevoSender(internal_senders,devo_sender_0) -> Starting thread
2022-09-23T17:08:15.137    INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(internal_senders,devo_2) -> Starting thread (every 300 seconds)
2022-09-23T17:08:15.137    INFO OutputProcess::MainThread -> DevoSenderManager(internal_senders,manager,devo_2) -> Starting thread
2022-09-23T17:08:15.144    INFO InputProcess::MainThread -> [GC] global: 36.4% -> 36.5%, process: RSS(36.91MiB -> 38.38MiB), VMS(345.21MiB -> 345.45MiB)
2022-09-23T17:08:15.151    INFO OutputProcess::MainThread -> [GC] global: 36.4% -> 36.5%, process: RSS(37.10MiB -> 39.08MiB), VMS(921.24MiB -> 921.24MiB)

Events delivery and Devo ingestion

The event delivery module receives the events from the internal queues, where all events are injected by the pullers, and delivers them using the selected compatible delivery method.

A successful run has the following output messages for the sender services:

Code Block
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Sender: SyslogSender(standard_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Standard - Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 44 (elapsed 0.007 seconds)
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Sender: SyslogSender(internal_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Internal - Total number of messages sent: 1, messages sent since "2022-06-28 10:39:22.516313+00:00": 1 (elapsed 0.019 seconds)
Info

By default, these information traces will be displayed every 10 minutes.

Sender services

The Integrations Factory Collector SDK has 3 different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:

Sender services

Description

internal_senders

In charge of delivering internal metrics to Devo such as logging traces or metrics.

standard_senders

In charge of delivering pulled events to Devo.

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service.

sender manager internal queue size: 0

Displays the items available in the internal sender queue.

Info

This value helps detect bottlenecks and the need to increase the performance of the data delivery to Devo. This can be done by increasing the number of concurrent senders.

Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of events sent since the last checkpoint. Following the given example, the following conclusions can be obtained:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.

Info

By default these traces will be shown every 10 minutes.

Expand
titleCheck memory usage

To check the memory usage of this collector, look for the following log records in the collector, which are displayed every 5 minutes by default, always after running the memory-freeing process.

  • The used memory is displayed per running process, and the sum of both values gives the total memory used by the collector.

  • The global pressure of the available memory is displayed in the global value.

  • All metrics (global, RSS, VMS) show the value before and after freeing memory (previous -> after).

Code Block
INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB -> 410.02MiB)
INFO OutputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(28.41MiB -> 28.41MiB), VMS(705.28MiB -> 705.28MiB)
Info

Differences between RSS and VMS memory usage:

  • RSS is the Resident Set Size, which is the actual physical memory the process is using

  • VMS is the Virtual Memory Size, which is the virtual memory the process is using

Expand
titleEnable/disable the logging debug mode

Sometimes it is necessary to activate the debug mode of the collector's logging. This debug mode increases the verbosity of the log and allows you to print execution traces that are very helpful in resolving incidents or detecting bottlenecks in heavy download processes.

  • To enable this option, edit the configuration file, change the debug_status parameter from false to true, and restart the collector.

  • To disable this option, edit the configuration file, change the debug_status parameter from true to false, and restart the collector.

For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.
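In the config.yaml example shown earlier on this page, the debug flag lives in the globals block. A minimal sketch (the parameter may appear as debug or debug_status depending on the collector version, so check the configuration section for your deployment mode):

Code Block
globals:
  debug: true    # set back to false to return to the normal log verbosity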

Expand
titleTroubleshooting

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

ErrorType

Error Id

Error Message

Cause

Solution

AwsModuleDefinitionError

1

"{module_properties_key_path}" mandatory property is missing or empty

module_properties is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

2

"{module_properties_key_path}" property must be a dictionary

module_properties is not a dictionary type data structure.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

3

"{module_properties_key_path}.tag_base" mandatory property is missing or empty

tag_base is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

4

"{module_properties_key_path}.tag_base" property must be a string

tag_base is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

5

"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders

tag_base does not literally contain all those placeholders, and they are required.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

6

"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]

tag_base has an unauthorized placeholder or is not correctly built.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

7

"{module_properties_key_path}.event_type_default" mandatory property is missing or empty

event_type_default is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

8

"{module_properties_key_path}.event_type_default" property must be a string

event_type_default is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

26

"{module_properties_key_path}.event_type_source_field_name" mandatory property is missing or empty

event_type_source_field_name is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

27

"{module_properties_key_path}.event_type_source_field_name" property must be a boolean

event_type_source_field_name is not of type boolean.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

26

"{module_properties_key_path}.event_type_extracting_regex" mandatory property is missing or empty

event_type_extracting_regex is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

27

"{module_properties_key_path}.event_type_extracting_regex" property must be a boolean

event_type_extracting_regex is not of type boolean.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

5

"{module_properties_key_path}.event_type_extracting_regex" property is not a valid regular expression

event_type_extracting_regex is not a valid Regular Expression.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

7

"{module_properties_key_path}.enable_auto_event_type" mandatory property is missing or empty

enable_auto_event_type is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

8

"{module_properties_key_path}.enable_auto_event_type" property must be a string

enable_auto_event_type is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

9

"{module_properties_key_path}.enable_auto_event_type_config_key" mandatory property is missing or empty

enable_auto_event_type_config_key is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

10

"{module_properties_key_path}.enable_auto_event_type_config_key" property must be a string

enable_auto_event_type_config_key is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

31

"{module_properties_key_path}.event_type_processor_mapping" property should be a dictionary.

event_type_processor_mapping is not of type dictionary.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

32

"{module_properties_key_path}.event_type_processor_mapping" exists but it is empty.

event_type_processor_mapping cannot be empty, and it is.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

33

"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.processor_class" mandatory property is missing or empty

processor_class does not exist or is empty.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

34

"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.processor_class" should be a string

processor_class is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

35

"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.tagging" mandatory property is missing or empty

tagging is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

36

"{module_properties_key_path}.event_type_processor_mapping.{processor_name}.tagging" should be a string

tagging is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

10

"{module_properties_key_path}.sqs_queue_custom_name_key" mandatory property is missing or empty

sqs_queue_custom_name_key is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

11

"{module_properties_key_path}.sqs_queue_custom_name_key" property must be a string

sqs_queue_custom_name_key is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

12

"{module_properties_key_path}.sqs_queue_required_default_name" property must be a boolean

sqs_queue_required_default_name is not of type boolean.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

13

"{module_properties_key_path}.sqs_queue_default_name" mandatory property is missing or empty

sqs_queue_default_name is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

14

"{module_properties_key_path}.sqs_queue_default_name" property must be a string

sqs_queue_default_name is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

15

"{module_properties_key_path}.sqs_queue_default_name" property must have {input_id} placeholder

sqs_queue_default_name does not have the required {input_id} placeholder.

This is an internal issue. Please, contact with Devo Support team.

AwsInputConfigurationError

1

"{input_config_key_path}" mandatory property is missing or empty

The inputs data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsInputConfigurationError

2

"{input_config_key_path}" property must be a dictionary

The inputs data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsServiceConfigurationError

1

"{service_config_key_path}" mandatory property is missing or empty

The services data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

2

"{service_config_key_path}" property must be a dictionary

The services data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

3

"{service_config_key_path}.tag" property must be a string

tag is not of type string.

Change the tag parameter to be a string.

AwsServiceConfigurationError

4

"{service_config_key_path}.{sqs_queue_custom_name_key}" mandatory property is missing or empty

The parameter indicated in the error message is not present in the configuration file.

Add the indicated parameter to the configuration file.

AwsServiceConfigurationError

5

"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

AwsServiceConfigurationError

6

"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

AwsServiceConfigurationError

7

"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

Common for all the services using the S3+SQS pipeline

ErrorType

Error Id

Error Message

Cause

Solution

AwsModuleDefinitionError

1

"{module_properties_key_path}" mandatory property is missing or empty

module_properties is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

2

"{module_properties_key_path}" property must be a dictionary

module_properties is not a dictionary type data structure.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

10

"{module_properties_path}.start_time_regex" mandatory property is missing or empty

start_time_regex is not present in the collector definitions file.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

11

"{module_properties_path}.start_time_regex" property must be a string

start_time_regex is of a type other than string.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

12

"{module_properties_path}.start_time_regex" property is not a valid regular expression

start_time_regex is not a valid Regular Expression.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

21

"{sqs_s3_processor_properties_key_path}" mandatory property is missing or empty

The parameter indicated in the error message is not present in collector definition file.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

22

"{sqs_s3_processor_properties_key_path}" property must be a dictionary

The parameter indicated in the error message is not of type dictionary.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

26

"{sqs_s3_processor_properties_key_path}.class_name" mandatory property is missing or empty

class_name is empty or is not present in the collector definition file.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

27

"{sqs_s3_processor_properties_key_path}.class_name" property must be a string

class_name is not of type string.

This is an internal issue. Please contact the Devo Support team.

AwsModuleDefinitionError

10

"{module_properties_key_path}.sqs_queue_custom_name_key" mandatory property is missing or empty

sqs_queue_custom_name_key is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

11

"{module_properties_key_path}.sqs_queue_custom_name_key" property must be a string

sqs_queue_custom_name_key is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

12

"{module_properties_key_path}.sqs_queue_required_default_name" property must be a boolean

sqs_queue_required_default_name is not of type boolean.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

13

"{module_properties_key_path}.sqs_queue_default_name" mandatory property is missing or empty

sqs_queue_default_name is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

14

"{module_properties_key_path}.sqs_queue_default_name" property must be a string

sqs_queue_default_name is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

15

"{module_properties_key_path}.sqs_queue_default_name" property must have {input_id} placeholder

sqs_queue_default_name does not have the required {input_id} placeholder.

This is an internal issue. Please, contact with Devo Support team.

AwsInputConfigurationError

1

"{input_config_key_path}" mandatory property is missing or empty

The inputs data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsInputConfigurationError

2

"{input_config_key_path}" property must be a dictionary

The inputs data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsServiceConfigurationError

1

"{service_config_key_path}" mandatory property is missing or empty

The services data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

2

"{service_config_key_path}" property must be a dictionary

The services data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

3

"{service_config_key_path}.tag" property must be a string

tag is not of type string.

Change the tag parameter to be a string.

AwsServiceConfigurationError

4

"{service_config_key_path}.{sqs_queue_custom_name_key}" mandatory property is missing or empty

The parameter indicated in the error message is not present in the configuration file.

Add the indicated parameter to the configuration file.

AwsServiceConfigurationError

5

"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

AwsServiceConfigurationError

6

"{service_config_key_path}.{sqs_queue_custom_name_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

AwsServiceConfigurationError

7

"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string

The parameter indicated in the error message is not of type string.

Make the indicated parameter be a string.

AwsServiceConfigurationError

7

"{service_config_key_path}.start_time" property must be a string

start_time is not of type string.

Change the start_time parameter to be a string.

AwsServiceConfigurationError

8

The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"

start_time parameter does not match the Regular Expression.

Change the start_time parameter to match the Regular Expression indicated in the error message.

AwsQueueException

0

Queue "{sqs_queue_name}" used by service "{service_name}" in "{submodule_config}" region is not available: reason: {reason}

The queue indicated in the error message is not available.

Check the following:

  • The name of the queue is correct.

  • The queue exists in the indicated region.

  • The reason returned in the error message, which usually points to the exact problem.
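When this AwsQueueException appears, it can also help to verify from outside the collector that the queue is reachable with the same region and credentials. The following is a minimal sketch using boto3; the queue name and region are placeholder values that you would replace with the ones used in your configuration.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder values: replace them with the queue name and region
# configured for the affected service.
QUEUE_NAME = "my-collector-queue"
REGION = "us-east-1"

sqs = boto3.client("sqs", region_name=REGION)

try:
    # get_queue_url fails if the queue does not exist in this region
    # or if the credentials cannot access it.
    response = sqs.get_queue_url(QueueName=QUEUE_NAME)
    print(f"Queue is available: {response['QueueUrl']}")
except ClientError as error:
    # "AWS.SimpleQueueService.NonExistentQueue" usually means a wrong name
    # or wrong region; "AccessDenied" points to missing SQS permissions.
    print(f"Queue is not available: {error.response['Error']['Code']}")
```

If this check also fails, the problem lies in the queue name, the region, or the permissions rather than in the rest of the collector configuration.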

Audit (via API)

ErrorType

Error Id

Error Message

Cause

Solution

AwsModuleDefinitionError

1

"{module_properties_key_path}" mandatory property is missing or empty

module_properties is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

2

"{module_properties_key_path}" property must be a dictionary

module_properties is not a dictionary type data structure.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

3

"{module_properties_key_path}.tag_base" mandatory property is missing or empty

tag_base is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

4

"{module_properties_key_path}.tag_base" property must be a string

tag_base is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

5

"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders

tag_base does not contain all of the required placeholders.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

6

"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]

tag_base has an unauthorized placeholder or is not correctly built.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

7

"{module_properties_key_path}.event_type_default" mandatory property is missing or empty

event_type_default is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

8

"{module_properties_key_path}.event_type_default" property must be a string

event_type_default is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

9

"{module_properties_key_path}.enable_auto_event_type" mandatory property is missing or empty

enable_auto_event_type is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

10

"{module_properties_key_path}.enable_auto_event_type" property must be a string

enable_auto_event_type is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

11

"{module_properties_key_path}.enable_auto_event_type_config_key" mandatory property is missing or empty

enable_auto_event_type_config_key is empty or is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

12

"{module_properties_key_path}.enable_auto_event_type_config_key" property must be a string

enable_auto_event_type_config_key is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

13

"{module_properties_key_path}.start_time_regex" mandatory property is missing or empty

start_time_regex is empty or is not present in the collector definition file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

14

"{module_properties_key_path}.start_time_regex" property must be a string

start_time_regex is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

15

"{module_properties_key_path}.start_time_regex" property is not a valid regular expression

start_time_regex is not a valid Regular Expression.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

16

"{module_properties_key_path}.gap_until_now_in_minutes" mandatory property is missing or empty

gap_until_now_in_minutes is empty or not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

17

"{module_properties_key_path}.gap_until_now_in_minutes" property must be a string

gap_until_now_in_minutes is not of type integer.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

18

"{module_properties_key_path}.gap_until_now_in_minutes" property can not be a negative value

gap_until_now_in_minutes has a negative value, which is not allowed.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

19

"{module_properties_key_path}.time_slot_in_hours" mandatory property is missing or empty

time_slot_in_hours is empty or is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

20

"{module_properties_key_path}.time_slot_in_hours" property must be an integer

time_slot_in_hours is not of type integer.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

21

"{module_properties_key_path}.time_slot_in_hours" property can not be 0 or a negative value

time_slot_in_hours is zero or has a negative value, which is not allowed.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

22

"{module_properties_key_path}.sources" mandatory property is missing or empty

sources is empty or is not present in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

23

"{module_properties_key_path}.sources" property exists but with wrong format, only "str" or "list" values are allowed

sources is not of type string or list.

This is an internal issue. Please, contact with Devo Support team.

AwsInputConfigurationError

1

"{input_config_key_path}" mandatory property is missing or empty

The inputs data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsInputConfigurationError

2

"{input_config_key_path}" property must be a dictionary

The inputs data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsServiceConfigurationError

1

"{service_config_key_path}" mandatory property is missing or empty

The services data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

2

"{service_config_key_path}" property must be a dictionary

The services data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

3

"{service_config_key_path}.tag" property must be a string

tag is not of type string.

Change the tag parameter to be a string.

AwsServiceConfigurationError

4

"{service_config_key_path}.sources" property exists but with wrong format, only "str" or "list" values are allowed

sources is not of type string or list.

Change the sources parameter to be a string or a list.

AwsServiceConfigurationError

5

"{service_config_key_path}.gap_until_now_in_minutes" property must be an integer

gap_until_now_in_minutes is not of type integer.

Change the gap_until_now_in_minutes parameter to be an integer.

AwsServiceConfigurationError

6

"{service_config_key_path}.start_time" property must be a string

start_time is not of type string.

Change the start_time parameter to be a string.

AwsServiceConfigurationError

7

The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"

start_time does not match the Regular Expression indicated in the error message.

Change the start_time parameter to match the Regular Expression indicated in the error message.

AwsServiceConfigurationError

8

"{service_config_key_path}.drop_event_names" property must be a list

drop_event_names is not of type list.

Change the drop_event_names parameter to be a list.

AwsServiceConfigurationError

9

"{service_config_key_path}.{enable_auto_event_type_config_key}" property must be a string

The parameter indicated by the error message is not of type string.

Change the parameter indicated by the error message to be a string.

AwsServiceConfigurationError

10

"{service_config_key_path}.time_slot_in_hours" property must be integer

time_slot_in_hours is not of type integer.

Change the time_slot_in_hours parameter to be an integer.

Custom Logs

ErrorType

Error Id

Error Message

Cause

Solution

AwsInputConfigurationError

0

Mandatory property "requests_per_second" is missing (located at: aws.request_per_second)

requests_per_second is not present in the configuration file.

Add requests_per_second to the configuration file.

AwsModuleDefinitionError

1

"{module_properties_key_path}" mandatory property is missing or empty

module_properties is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

2

"{module_properties_key_path}" property must be a dictionary

module_properties is not a dictionary type data structure.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

3

"{module_properties_key_path}.tag_base" mandatory property is missing or empty

tag_base is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

4

"{module_properties_key_path}.tag_base" property must be a string

tag_base is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

5

"{module_properties_key_path}.tag_base" property must have {event_type}, {account_id}, {region_id} and {format_version} placeholders

tag_base does not contain all of the required placeholders.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

6

"{module_properties_key_path}.tag_base" property is containing some unexpected placeholders, the allowed ones are: ["event_type", "account_id", "region_id", "format_version", "environment", "service_name"]

tag_base has an unauthorized placeholder or is not correctly built.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

7

"{module_properties_key_path}.start_time_regex" mandatory property is missing or empty

start_time_regex is not present or is empty in the collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

8

"{module_properties_key_path}.start_time_regex" property must be a string

start_time_regex is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsModuleDefinitionError

9

"{module_properties_key_path}.start_time_regex" property is not a valid regular expression

start_time_regex is not a valid Regular Expression.

This is an internal issue. Please, contact with Devo Support team.

AwsInputConfigurationError

1

"{input_config_key_path}" mandatory property is missing or empty

The inputs data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsInputConfigurationError

2

"{input_config_key_path}" property must be a dictionary

The inputs data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsServiceConfigurationError

1

"{service_config_key_path}" mandatory property is missing or empty

The services data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

2

"{service_config_key_path}" property must be a dictionary

The services data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

3

"{service_config_key_path}.tag" property must be a string

tag is not of type string.

Change the tag parameter to be a string.

AwsServiceConfigurationError

43

"{service_config_key_path}.use_first_optimized_retrieval" property must be a boolean

use_first_optimized_retrieval is not of type boolean.

Change the use_first_optimized_retrieval parameter to be a boolean.

AwsServiceConfigurationError

1

"{service_config_key_path}.log_group" mandatory property is missing or empty

log_group is empty or is not present in the configuration file.

Add the log_group parameter to the configuration file.

AwsServiceConfigurationError

1

"{service_config_key_path}.log_group" property must be a string

log_group is not of type string.

Change the log_group parameter to be a string.

AwsServiceConfigurationError

2

"{service_config_key_path}.start_time" property must be a string

start_time is not of type string.

Change the start_time parameter to be a string.

AwsServiceConfigurationError

1

The property "{service_config_key_path}.start_time" from configuration is having a wrong format, expected pattern: "{start_time_regex}"

start_time is not matching the pattern indicated in the error message.

Make start_time match the pattern indicated in the error message.
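Several of the Custom Logs errors above concern the log_group parameter. As a quick sanity check, you can confirm from outside the collector that the configured log group exists in the target region. The snippet below is a minimal sketch using boto3; the log group name and region are placeholder values.

```python
import boto3

# Placeholder values: replace them with the log group and region
# from the collector configuration.
LOG_GROUP = "/aws/lambda/my-function"
REGION = "eu-west-1"

logs = boto3.client("logs", region_name=REGION)

# describe_log_groups returns every log group whose name starts with the
# given prefix, so an exact match confirms the configured value exists.
response = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)
names = [group["logGroupName"] for group in response.get("logGroups", [])]

if LOG_GROUP in names:
    print(f"Log group found: {LOG_GROUP}")
else:
    print(f"Log group not found, check the log_group parameter: {LOG_GROUP}")
```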

Metrics

ErrorType

Error Id

Error Message

Cause

Solution

AwsServiceDefinitionException

1

"{module_properties_key_path}" mandatory property is missing or empty

module_properties is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

2

"{module_properties_key_path}" property must be a dictionary

module_properties is not a dictionary type data structure.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

5

"{module_properties_key_path}.tag_base" mandatory property is missing or empty

tag_base is not present in collector definitions file.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

6

"{module_properties_key_path}.tag_base" property must be a string

tag_base is not of type string.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

1

"{module_properties_key_path}.metric_namespace" mandatory property is missing or empty

metric_namespace is empty or is not present in collector definitions.

This is an internal issue. Please, contact with Devo Support team.

AwsServiceDefinitionException

1

"{module_properties_key_path}.metric_namespace" property must be a list

metric_namespace is not of type list.

This is an internal issue. Please, contact with Devo Support team.

AwsInputConfigurationError

1

"{input_config_key_path}" mandatory property is missing or empty

The inputs data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsInputConfigurationError

1

"{input_config_key_path}" property must be a dictionary

The inputs data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the inputs.

AwsServiceConfigurationError

1

"{service_config_key_path}" mandatory property is missing or empty

The services data structure is missing or empty in the configuration file.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

8

"{service_config_key_path}" property must be a dictionary

The services data structure is not of type dictionary.

Go to the “Running the collector on…“ section in the documentation and check the data structure required for the services.

AwsServiceConfigurationError

8

"{service_config_key_path}.tag" property must be a string

tag is not of type string.

Change the tag parameter to be a string.

AwsServiceConfigurationError

0

Mandatory property "metric_namespaces" is missing

metric_namespaces is not present in configuration file.

Add metric_namespaces to the configuration file.

AwsServiceConfigurationError

0

Mandatory property "metric_namespaces" property must be a list

metric_namespaces is not of type list.

Change the metric_namespaces parameter to be a list.

AwsServiceConfigurationError

1

When a service uses "metrics" type, the property "request_period_in_seconds" must have one of the following values: 1, 5, 10, 30, 60, or any multiple of 60

request_period_in_seconds is using a value that is not allowed.

Change the request_period_in_seconds parameter to match one of these values: 1, 5, 10, 30, 60, or any multiple of 60.

AwsServiceConfigurationError

4

"start_time" property must be a string

start_time is not of type string.

Change the start_time parameter to be a string.

AwsServiceConfigurationError

4

The property "start_time" from configuration is having a wrong format, expected: YYYY-mm-ddTHH:MM:SSZ

start_time is using an incorrect format.

Change the start_time to match the format indicated in the error message.
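The last rows above describe the values accepted for request_period_in_seconds and start_time when collecting metrics. The sketch below simply mirrors those rules in plain Python so that a value can be checked before it is added to the configuration file; it is an illustrative check, not the collector's own validation code.

```python
from datetime import datetime


def is_valid_start_time(value: str) -> bool:
    """start_time must follow the YYYY-mm-ddTHH:MM:SSZ format."""
    try:
        datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
        return True
    except ValueError:
        return False


def is_valid_request_period(seconds: int) -> bool:
    """Allowed values are 1, 5, 10, 30, 60, or any multiple of 60."""
    return seconds in (1, 5, 10, 30, 60) or (seconds > 0 and seconds % 60 == 0)


print(is_valid_start_time("2023-05-01T00:00:00Z"))  # True
print(is_valid_start_time("2023-05-01 00:00:00"))   # False
print(is_valid_request_period(300))                 # True, multiple of 60
print(is_valid_request_period(45))                  # False
```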

Change log for v1.x.x

Release

Released on

Release type

Details

Recommendations

v1.4.0

NEW FEATURE

IMPROVEMENT

BUG FIX

New features:

  • The CrossAccount authentication method is now available, improving the way credentials are shared when the collector runs in the Collector Service.

Improvements:

  • The audit-events-all service (type audits_api) has been enhanced to allow requesting events older than 500 days.

Bug Fixes:

  • Fixed a bug that raised a KeyError when the optional parameter event_type_processor_mapping was not defined while running the service-events-all service.

Upgrade

v1.4.1

BUG FIX

Bug Fixes:

  • Fixed a bug that prevented the use of the Assumed Role authentication method.

  • Fixed a bug that prevented session renewal when using any of the Assume Authentication methods:

    • Assume Role

    • Cross Account

Recommended version