Service description

Amazon Web Services (AWS) provides on-demand cloud computing platforms and APIs to individuals and companies. Each available AWS service generates information related to different aspects of its functionality. The available data types include service events, audit events, metrics, and logs.

You can use the AWS collector to retrieve data from the AWS APIs and send it to your Devo domain. Once the gathered information arrives at Devo, it will be processed and included in different tables in the associated Devo domain so users can analyze it.

Data source description

From the monitoring point of view, AWS generates the following types of information:

Events. This is information generated at a specific timestamp. It is always linked to that timestamp and can be divided into two subtypes:

Service events

The different available services usually generate information related to their internal behaviors, such as "A virtual machine has been started", "A new file has been created in an S3 bucket", or "An AWS Lambda function has been invoked".

Note that this type of event can be triggered without any human interaction. These events are managed by the CloudWatch Events (CWE) service. AWS has since introduced a new service called Amazon EventBridge that will replace the CWE service.
The findings detected by AWS Security Hub are also managed by CloudWatch Events (CWE).

Audit events

These events are more specific because they always require human interaction, no matter how it is performed (API call, web console, or even CLI command).

These events are managed by the CloudTrail service.

Metrics. According to the standard definition, this kind of information is usually generated at the exact moment it is requested, because it is typically a query about the status of a service (everything inside AWS is considered a service).

AWS behaves slightly differently here: it generates metrics information at fixed intervals (1 min, 5 min, 30 min, 1 h, etc.), even if nobody requests it.

This kind of information is managed by the CloudWatch Metrics (CWM) service.

Logs. Logs can be defined as information with a non-fixed structure that is sent to one of the available logging services: CloudWatch Logs or S3.

Some highly customizable services, such as AWS Lambda, or any application deployed on an AWS virtual machine (EC2), can generate custom log information. This kind of information is managed by the CloudWatch Logs (CWL) service and also by the S3 service.

There are also services that generate logs with a fixed structure, such as VPC Flow Logs or CloudFront Logs. Collecting data from these services requires a special approach.

Some services can send their information to several targets at the same time. For example, the CloudTrail service generates audit-related information; although this information is really an "audit event", it can also be treated as a simple event and sent to the CloudWatch Events service, sent as a string log line to the CloudWatch Logs service, or stored as a file in a bucket inside the S3 service.

  • CloudWatch Events is in the process of being renamed to Amazon EventBridge.

  • Almost all services that generate service events send them to the CloudWatch Events (CWE) service. It could be said that 90% of services use CloudWatch Events (the same service events are also sent to the new Amazon EventBridge service).

  • CloudWatch Events (CWE), CloudWatch Metrics (CWM), and CloudWatch Logs (CWL) are considered different services.

Setup

Some manual actions are necessary to set up all the required services and allow the Devo collector to gather the information from AWS.

The following sections describe how to get the required AWS credentials and how to proceed with the different required setups depending on the gathered information type.

Credentials

There are several ways to define credentials; only two approaches are detailed here.

It is recommended to create (or have available) the following IAM policies before creating the IAM user that the AWS collector will use.

For each source type, the following entries list the AWS data bus, the recommended policy name, and the available policy variants with their policy documents.

Service events

AWS data bus: CloudWatch Events
Recommended policy name: devo-cloudwatch-events

All resources: creating a new policy is not required because no permissions are needed.

Audit events

AWS data bus: CloudTrail API
Recommended policy name: devo-cloudtrail-api

All resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudtrail:LookupEvents",
            "Resource": "*"
        }
    ]
}

 

 

Specific resource: there is no way to limit the accessed resources.

AWS data bus: CloudTrail SQS+S3
Recommended policy name: devo-cloudtrail-s3

All resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

 

 

 

Specific S3 bucket: note that the value of the Resource property must be replaced with the proper values. The /* string at the end of each bucket name is very important.

 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::devo-cloudtrail-storage-bucket1/*",
                "arn:aws:s3:::devo-cloudtrail-storage-bucket2/*"
            ]
        }
    ]
}
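When many buckets are involved, a bucket-scoped policy like the one above can be generated programmatically. The following is a minimal sketch (the helper name is illustrative, not part of the collector) that appends the required /* suffix to every bucket ARN:

```python
import json

def cloudtrail_s3_policy(bucket_names):
    """Build a read-only S3 policy scoped to specific buckets.

    Appends the required '/*' suffix so the policy matches the objects
    inside each bucket, not just the bucket itself.
    """
    resources = [f"arn:aws:s3:::{name}/*" for name in bucket_names]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": resources,
            }
        ],
    }

policy = cloudtrail_s3_policy(["devo-cloudtrail-storage-bucket1",
                               "devo-cloudtrail-storage-bucket2"])
print(json.dumps(policy, indent=4))
```

The output of this sketch matches the policy document shown above and can be pasted into the IAM policy editor.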

Metrics

AWS data bus: CloudWatch Metrics
Recommended policy name: devo-cloudwatch-metrics

All resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics"
            ],
            "Resource": "*"
        }
    ]
}

Specific resource: there is no way to limit the accessed resources.

Logs

AWS data bus: CloudWatch Logs
Recommended policy name: devo-cloudwatch-logs

All log groups: note that the account ID in the Resource property must be replaced with the proper value.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": "arn:aws:logs:*:936082584952:log-group:*"
        }
    ]
}

Specific log groups: note that the values inside the Resource property are only examples and must be replaced with the proper values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:FilterLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-1:*",
                "arn:aws:logs:*:936082584952:log-group:/aws/events/devo-cloudwatch-test-2:*"
            ]
        }
    ]
}

AWS data bus: Logs to S3 + SQS
Recommended policy name: devo-vpcflow-logs

All resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}

Specific resource:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::vpc-flowlogs-test1/*"
        }
    ]
}

Depending on which source types are collected, one or more of the policies described above will be used.

Once the required policies are created, each one must be associated with an IAM user. To create the user, go to the AWS Console and log in with a user account that has enough permissions to create and access AWS structures:

  1. Go to IAM → Users.

  2. Click the Add users button.

  3. Enter the required value in the field User name.

  4. Enable the checkbox Access key - Programmatic access.

  5. Click the Next: Permissions button.

  6. Choose the box with the text Attach existing policies directly.

  7. Use the search box to locate all your required policies and check the boxes at the left of each policy name.

  8. Click Next: Tags. Optionally, add any desired tags to the new user.

  9. Click Next: Review.

  10. Click the Create user button.

  11. A new Access Key ID and Secret Access Key will be created. You can click the Download .csv button to download a copy to your local machine, or copy the values shown on the screen. These will be used as the AWS collector credentials.

  12. Finally, click the Close button.

Service events

All the service events generated on AWS are managed by CloudWatch. However, Devo’s AWS collector offers two different services that collect CloudWatch events: service-events-all and sqs-cloudwatch-consumer.

AWS services generate service events per region, so the following instructions must be applied in each region from which information needs to be collected (use the same values for all your configured regions).

In order to collect these service events, there are some structures that must be created: one FIFO queue in the SQS service and one Rule+Target in the CloudWatch service.

If the auto-setup functionality is enabled in the configuration and the related credentials have enough permissions to create all the required AWS structures, the following steps are not required.

To create these required structures manually, follow these steps (click to expand):

  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the FIFO queue type and set the Name field value you prefer (it must end with the .fifo suffix).

  3. In the Configuration section, set the Message retention period field value to 5 Days. Be sure that the Content-based deduplication checkbox is marked and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Basic and, for both the send and receive permissions, select the option Only the queue owner.

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.

  1. Go to CloudWatch → Rules and click Create rule.

  2. In the Event Source section, select Event Pattern and Build event pattern to match events by service.

  3. In the Service Name field, enter the service to be monitored (check the note below for Security Hub findings).

  4. In the Event Type field, choose All Events (check the note below for Security Hub findings).

  5. In the Targets section, click Add target and choose SQS queue as the target type.

  6. In the Queue dropdown, choose the previously created queue.

  7. In the Message group ID field, set the value devo-collector.

  8. Then, click Configure details.

  9. In the Rule definition section, set the Name you prefer. Make sure that the State checkbox is marked.

To retrieve Security Hub Findings, select Security Hub in the Service Name field, and Security Hub Findings - Custom Action in the Event Type field.
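For reference, the rule built in the steps above matches events with a JSON event pattern. A sketch of the pattern a Security Hub findings rule would use (following the Service Name and Event Type values mentioned in the note; the exact pattern generated by the console may include additional fields) looks like this:

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Custom Action"]
}
```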

Audit events

No actions are required in the CloudTrail service to retrieve this kind of information.

Metrics

No actions are required in the CloudWatch Metrics service to retrieve this kind of information.

Logs

Logs can be collected from different services. Depending on the service type, you may need to apply some settings on AWS:

CloudWatch Logs

No actions are required in this service to retrieve this kind of information.

VPC Flow Logs

Before enabling the generation of these logs, you must create one bucket in the S3 service and one standard queue in the SQS service. To create these required structures manually, follow these steps (click to expand):

  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the Standard queue type and set the Name field value you prefer.

  3. In the Configuration section, set the Message retention period field value to 5 Days and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*". Leave the rest of the JSON as default.

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.
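After the replacement described in step 4, the queue access policy should look similar to this sketch (the Sid, region, account, and queue name are illustrative values of the kind the console generates):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:devo-ncwl-vpcfl-example"
    }
  ]
}
```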

  1. Go to S3 and click Create bucket.

  2. Set the preferred value in the Bucket name field.

  3. Choose the required Region value and click Next.

  4. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector. Leave the rest of the fields with their default values and click Next.

  5. Click Create bucket.

  6. Mark the checkbox next to the previously created S3 bucket.

  7. In the popup box, click Copy Bucket ARN and save the content. You will need it later.

  8. In the S3 bucket list, click the previously created bucket name link.

  9. Click the Properties tab, then click the Events box.

  10. Click Add notification.

  11. Set the preferred value in the Name field.

  12. Select the All object create events checkbox.

  13. In the Send to field, select SQS Queue.

  14. Select the previously created SQS queue in the SQS field.
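The notification created in steps 10-14 is equivalent to a bucket notification configuration similar to this sketch (the Id and queue ARN are illustrative):

```json
{
  "QueueConfigurations": [
    {
      "Id": "devo-collector-notification",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:devo-ncwl-vpcfl-example",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```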

Once the required AWS structures are created, go to the VPC service and follow these steps:

  1. Select any available VPC (or create a new one).

  2. Go to the Flow Logs tab and click Create flow log.

  3. Choose the preferred Filter value and the required Maximum aggregation interval value.

  4. In the Destination field, select Send to an S3 bucket.

  5. In the S3 bucket ARN field, set the ARN of the previously created S3 bucket.

  6. Make sure that the Format field has the value AWS default format.

  7. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  8. Finally, click Create.

CloudFront Logs

Before enabling the generation of these logs, you must create one bucket in the S3 service and one standard queue in the SQS service. To create these required structures manually, follow these steps (click to expand):

  1. Go to Simple Queue Service and click Create queue.

  2. In the Details section, choose the Standard queue type and set the Name field value you prefer.

  3. In the Configuration section, set the Message retention period field value to 5 Days and leave the rest of the options with their default values.

  4. In the Access policy section, choose the method Advanced and replace "Principal": {"AWS":"<account_id>"} with "Principal": "*". Leave the rest of the JSON as default.

  5. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector.

  6. Click Create queue.

  1. Go to S3 and click Create bucket.

  2. Set the preferred value in the Bucket name field.

  3. Choose the required Region value and click Next.

  4. Optionally, in the Tags section, you can create a tag with Key usedBy and Value devo-collector. Leave the rest of the fields with their default values and click Next.

  5. Click Create bucket.

  6. Mark the checkbox next to the previously created S3 bucket.

  7. In the popup box, click Copy Bucket ARN and save the content. You will need it later.

  8. In the S3 bucket list, click the previously created bucket name link.

  9. Click the Properties tab, then click the Events box.

  10. Click Add notification.

  11. Set the preferred value in the Name field.

  12. Select the All object create events checkbox.

  13. In the Send to field, select SQS Queue.

  14. Select the previously created SQS queue in the SQS field.

Once the required AWS structures are created, go to the CloudFront service and follow these steps:

  1. Click the ID of the target Distribution item and access the Distribution Settings options. Then, click Edit.

  2. In the Logging field, select On.

  3. In the Bucket for Logs field, enter the ARN of the previously created S3 bucket.

  4. Finally, click the Yes, Edit button.

Collector service details

The following tables show details about the predefined services available to be used in the collector configuration.

Each predefined service is described below with its complete service name, the CloudWatch filter used, the CloudTrail source filter used, the metrics namespace used, the data type it collects (events, audits, metrics, or logs), and a description.

service-events-all

Complete service name: All service events
CloudWatch filter used: {"account":["<account_id>"]}
CloudTrail source filter used: N/A
Metrics namespace used: N/A
Data type: events

This service will collect all service events information available in the CloudWatch service, no matter the source defined in the event.

audit-events-all

Complete service name: All audit events
CloudWatch filter used: N/A
CloudTrail source filter used: all_sources
Metrics namespace used: N/A
Data type: audits

This service will collect all audit events information available in the CloudTrail service, no matter the source defined in the event.

metrics-all

Complete service name: All metrics
CloudWatch filter used: N/A
CloudTrail source filter used: N/A
Metrics namespace used: all_metrics_namespaces
Data type: metrics

This service will collect all metric information from the CloudWatch service. Metrics from all the available metric namespaces will be retrieved.

<cwl_custom>

Complete service name: CloudWatch Logs
CloudWatch filter used: N/A
CloudTrail source filter used: N/A
Metrics namespace used: N/A
Data type: logs

This service will collect the different “Log Streams” that are part of a “Log Group” from the CloudWatch Logs service. Since it is common to have more than one “Log Group” defined, this will require creating one <cwl_custom> entry per “Log Group”.

non-cloudwatch-logs

Complete service name: Non-CloudWatch Logs
CloudWatch filter used: N/A
CloudTrail source filter used: N/A
Metrics namespace used: N/A
Data type: logs

This service will collect data from the following services: VPC Flow Logs and CloudFront Logs.

sqs-cloudwatch-consumer

Complete service name: Service events generated by CloudWatch Events service
CloudWatch filter used: check more info here
CloudTrail source filter used: N/A
Metrics namespace used: N/A
Data type: events

This service will collect all Security Hub findings that have been sent to CloudWatch, no matter the source defined in the finding.

In the service-events-all collector service, the <account_id> string is automatically replaced with the real value.

The values entered in <cwl_custom> must be unique.

Collector configuration details

Depending on the data types chosen for collection, the following service definitions can be added to the configuration inside the services section. The properties common to all services are described in the tables below.

Global predefined services

These service definitions can be used for collecting in a global way the different data types available in AWS.

Service events

This is the configuration to be used when any service event needs to be collected from AWS, except Security Hub.

service-events-all:
  #tag: my.app.aws_service_events
  cloudwatch_sqs_queue_name: <queue_name>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

The default target table is cloud.aws.cloudwatch.events

This is the configuration to be used when Security Hub events need to be collected.

sqs-cloudwatch-consumer:
  #tag: <str>
  cloudwatch_sqs_queue_name: <queue_name>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

The SQS queue name is required.

The default target table is cloud.aws.securityhub.findings

All audit events

There are two ways to get audit events. If only a few events will be generated in the platform, using the API may be enough. However, when mid or high volumes are expected, saving those audit events in an S3 bucket is the best choice. In that case, an SQS queue should be created so the collector can consume those events from it.

This is how the config file should be defined to retrieve audit events via API:

audit-events-all:
  #tag: <str with {placeholders}>
  #types:
    #- audits_api <str>
  #auto_event_type: <bool>
  #request_period_in_seconds: <int>
  #start_time: <datetime_iso8601_format>
  #drop_event_names: ["event1", "event2"] <list of str>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

Field

Type

Mandatory

Description

tag

string

no

Tag or tag format to be used, e.g.:

  • my.app.aws_audit_events

  • cloud.aws.cloudtrail.{event_type}.{account_id}.{region_id}.{collector_version}

types

list of strings (in yaml format)

no

Enable/Disable modules only when several modules per service are defined. To get audit events from API, this field should be set to audits_api.

request_period_in_seconds

integer

no

Period in seconds between each data pull. This value overwrites the default value (60 seconds).

start_time

datetime

no

Datetime from which to start collecting data. It must match ISO-8601 format.

auto_event_type

boolean

no

Used to enable automatic event-type categorization in message tagging.

drop_event_names

list of strings

no

If the value of the eventName field matches any of the values in this field, the event will be discarded.

For example, if this parameter is set to ["Decrypt", "AssumeRole"] and the value of the eventName field is Decrypt or AssumeRole, the event will be discarded.

regions

list of strings (in yaml format)

yes, if defined in the “Collector definitions”.

The property name (regions) must match the one defined in the submodules_property property of the “Collector definitions”.

On the other hand, if S3 + SQS is the chosen option to get the audit events, the config file should match the following format:

audit-events-all:
  #tag: <str with {placeholders}>
  #types:
    #- audits_s3 <str>
  #request_period_in_seconds: <int>
  #start_time: <datetime_iso8601_format>
  #auto_event_type: <bool>
  audit_sqs_queue_name: <str>
  #s3_file_type_filter: <str (RegEx)>
  #use_region_and_account_id_from_event: <bool>
  regions:
    - region_a <str>
    - region_b <str>
    - region_c <str>

The default target table is cloud.aws.cloudwatch.events

Field

Type

Mandatory

Description

tag

string

no

Tag or tag format to be used, e.g.:

  • my.app.aws_audit_events

  • cloud.aws.cloudtrail.{event_type}.{account_id}.{region_id}.{collector_version}

types

list of strings (in yaml format)

no

Enable/Disable modules only when several modules per service are defined

request_period_in_seconds

integer

no

Period in seconds between each data pull. This value overwrites the default value (60 seconds).

start_time

datetime

no

Datetime from which to start collecting data. It must match ISO-8601 format.

auto_event_type

boolean

no

Used to enable automatic event-type categorization in message tagging.

audit_sqs_queue_name

string

yes

Name of the SQS queue to read from.

s3_file_type_filter

string

no

Regular expression used to retrieve the proper file type from S3.

use_region_and_account_id_from_event

bool

no

If true, the region and account_id are taken from the event; if false, they are taken from the account used to do the data pulling. Default: true.

regions

list of strings (in yaml format)

yes, if defined in the “Collector definitions”.

The property name (regions) must match the one defined in the submodules_property property of the “Collector definitions”.
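Under the S3 + SQS approach, each SQS message carries an S3 event notification pointing at the object that holds the audit events. The following is a minimal sketch of extracting the bucket and key from such a message, assuming the standard S3 event notification shape (the function name is hypothetical, not part of the collector):

```python
import json

def extract_s3_objects(sqs_message_body):
    """Return (bucket, key) pairs referenced by an S3 event notification."""
    notification = json.loads(sqs_message_body)
    return [
        (record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
        for record in notification.get("Records", [])
    ]

# Example message body, following the standard S3 event notification shape
body = json.dumps({
    "Records": [{
        "s3": {
            "bucket": {"name": "devo-cloudtrail-storage-bucket1"},
            "object": {"key": "AWSLogs/example/CloudTrail/file.json.gz"},
        }
    }]
})
print(extract_s3_objects(body))
```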

All metrics

metrics-all:
  #tag: my.app.aws_metrics
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

The default target table is cloud.aws.cloudwatch.metrics

CloudWatch Logs

One entry must be defined per Log Group to be processed. In this example, two different entries (cwl_1, cwl_2) have been created for processing the Log Groups called /aws/log_stream_a and /aws/log_stream_b.

cwl_1:
  #tag: my.app.aws_cwl
  types:
    - logs
  log_group: /aws/log_stream_a
  regions:
    - <region_a>
    - <region_b>
    - <region_c>
cwl_2:
  #tag: my.app.aws_cwl
  types:
    - logs
  log_group: /aws/log_stream_b
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

As shown in the examples, the types list must be set to the logs value.

The default target table is cloud.aws.cloudwatch.logs

Non-CloudWatch Logs

non-cloudwatch-logs:
  #tag: my.app.aws_cwl
  #vpcflowlogs_sqs_queue_name: <custom_queue_a>
  #cloudfrontlogs_sqs_queue_name: <custom_queue_b>
  #auto_event_type: <bool>
  regions:
    - <region_a>
    - <region_b>
    - <region_c>

The default target tables are cloud.aws.vpc.flowlogs and cloud.aws.cloudfront.

The default expected SQS queue names for this service are devo-ncwl-vpcfl-<short_unique_identifier> and devo-ncwl-cfl-<short_unique_identifier>.

The properties vpcflowlogs_sqs_queue_name and cloudfrontlogs_sqs_queue_name can be used to set custom queue names instead of the default expected ones.

Run the collector

Once the data source is configured, you can send us the required information and we will host and manage the collector for you (Cloud collector), or you can host the collector in your own machine using a Docker image (On-premise collector).

We use a piece of software called Collector Server to host and manage all our available collectors. If you want us to host this collector for you, get in touch with us and we will guide you through the configuration.

This data collector can be run on any machine that has the Docker service available because it is executed as a Docker container. The following sections explain how to prepare all the required setup to get the data collector running.

Structure

The following directory structure should be created before running the AWS collector:

<any_directory>
└── devo-collectors/
    └── aws/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        └── config/ 
            └── config-aws.yaml

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA, and save them in <any_directory>/devo-collectors/aws/certs. Learn more about security credentials in Devo here.

Editing the config-aws.yaml file

In the config-aws.yaml file, replace the <access_key_value> and <access_secret_value> placeholders with the values obtained in the previous steps. For the <short_unique_identifier>, <region_*> and <log_group_value_*> placeholders, enter the values of your choice.

globals:
  debug: false
  id: not_used
  name: aws
  persistence:
    type: filesystem
    config:
      directory_name: state
outputs:
  devo_1:
    type: devo_platform
    config:
      address: collector-eu.devo.io
      port: 443
      type: SSL
      chain: chain.crt
      cert: <your_domain>.crt
      key: <your_domain>.key
inputs:
  aws:
    id: <short_unique_identifier>
    enabled: true
    requests_per_second: 5
    autoconfig:
      enabled: true
      refresh_interval_in_seconds: 600
    credentials:
      access_key: <access_key_value>
      access_secret: <access_secret_value>
    services:
      service-events-all:
        #tag: my.app.aws_service_events
        #sqs_queue_name: <custom_queue_name>.fifo
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      audit-events-all:
        #tag: my.app.aws_audit_events
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      metrics-all:
        #tag: my.app.aws_metrics
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      non-cloudwatch-logs:
        #tag: my.app.aws_non_cwl
        #cloudfrontlogs_sqs_queue_name: <custom_queue_name>
        #vpcflowlogs_sqs_queue_name: <custom_queue_name>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      cwl_1:
        #tag: my.app.aws_cwl
        types:
          - logs
        log_group: <log_group_a>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>
      cwl_2:
        #tag: my.app.aws_cwl
        types:
          - logs
        log_group: <log_group_b>
        regions:
          - <region_a>
          - <region_b>
          - <region_c>

By default, there are several SQS queue name patterns that the collector will try to use for retrieving information from AWS. Depending on the info type, the following patterns exist:

  • Service events: devo-<service_name>-<short_unique_identifier>.fifo. The property sqs_queue_name can be used to choose a custom name.

  • VPC Flow Logs: devo-ncwl-vpcfl-<short_unique_identifier>. The property vpcflowlogs_sqs_queue_name can be used to choose a custom name.

  • CloudFront: devo-ncwl-cfl-<short_unique_identifier>. The property cloudfrontlogs_sqs_queue_name can be used to choose a custom name.
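The patterns above can be expressed as simple string templates. A minimal sketch (the helper is hypothetical, for illustration only):

```python
def default_queue_names(short_unique_identifier, service_name):
    """Default SQS queue names the collector looks for, per info type."""
    return {
        "service_events": f"devo-{service_name}-{short_unique_identifier}.fifo",
        "vpc_flow_logs": f"devo-ncwl-vpcfl-{short_unique_identifier}",
        "cloudfront": f"devo-ncwl-cfl-{short_unique_identifier}",
    }

print(default_queue_names("abc1", "service-events-all"))
```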

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

Collector Docker image: collector-aws-docker-image-1.4.1.tgz

SHA-256 hash: 21e735b6338537632396171bd09829508949947fb672f14543ce97a475bc72b3

Use the following command to add the Docker image to the system:

gunzip -c collector-aws-docker-image-<version>.tgz | docker load

Once the Docker image is imported, the command output will show the real name of the Docker image (including version info). Replace <version> with the proper value.

The Docker image can be deployed on the following services:

  • Docker

  • Docker Compose

Docker

Execute the following command from the root directory <any_directory>/devo-collectors/aws/:

docker run \
--name collector-aws \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config-aws.yaml \
--rm -it docker.devo.internal/collector/aws:<version>

Replace <version> with a proper value.

Docker Compose

The following Docker Compose file can be used to run the Docker container. Create it in the <any_directory>/devo-collectors/aws/ directory and start it with docker-compose up -d from that directory.

version: '3'
services:
  collector-aws:
    build:
      context: .
      dockerfile: Dockerfile
    image: docker.devo.internal/collector/aws:${IMAGE_VERSION:-latest}
    container_name: collector-aws
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config-aws.yaml}