
Overview

Logs generated by most AWS services (CloudTrail, VPC Flow Logs, Elastic Load Balancing, etc.) can be exported as blob objects to S3. Many third-party services have adopted this paradigm as well, so it has become a common pattern across many different technologies. The Devo Professional Services and Technical Acceleration teams maintain base collector code that leverages this S3 paradigm to collect logs and can be customized for the different technology logs a customer may store in S3.

This documentation walks through setting up your AWS infrastructure so that our collector integration works out of the box:

  • Sending data to S3 (this guide uses Cloudtrail as a data source service)

  • Setting up S3 event notifications to SQS

  • Enabling SQS and S3 access using a cross-account IAM role

  • Gathering information to be provided to Devo for collector setup

General architecture diagram

...

Requirements

  • Access to S3, SQS, IAM, and CloudTrail services

  • Permissions to send data to S3

  • Knowledge of log format/technology type being stored into S3

Creating an S3 bucket and setting up a data feed (CloudTrail example)

The following will be set up during this section:

  • S3 bucket for data storage

  • CloudTrail trail for data logging into an S3 bucket

Create an S3 bucket

Navigate to the AWS Management Console and select S3.

Create a new bucket for these logs, or skip to the next step if you are using an existing bucket. The default S3 bucket permissions are sufficient.

Set up a CloudTrail trail to log events into an S3 bucket

...


After the bucket has been created, we will need to set up a data feed into this S3 bucket via CloudTrail. Click CloudTrail.

...


Create a new trail following these steps:

...

Click Create trail.

...

When setting up the trail, choose the S3 bucket you want CloudTrail to send data into. If you have an existing S3 bucket, select the existing-bucket option and enter your bucket name; otherwise, create a new S3 bucket here.

...

A prefix is optional but highly recommended, as it makes it easier to route S3 event notifications to different SQS queues.

...

All other options on this page are optional, and the default settings work. Check with your infrastructure team about your organization's preferences.

...

On the next page, choose the log events you wish CloudTrail to capture. At a minimum, we recommend enabling Management events. Data events and Insight events incur additional charges, and Data events can generate a very large volume of data if your account has heavy S3 users, so check with your AWS team about whether they are worthwhile to track.

...

Finish up and create the trail.

...

Creating an SQS queue and enabling S3 event notifications

SQS provides the following benefits from our perspective:

  • Built-in retries when processing a message fails

  • Dead-letter queueing (if enabled when setting up the SQS queue)

  • Tolerance of downstream outages without losing processing state

  • Parallelization of workers for very high-volume data

  • Guaranteed at-least-once delivery (S3 and SQS guarantees)

  • The ability to have multiple S3 buckets, even in other accounts, send events to the same SQS queue via S3 event notifications to SNS and on to SQS in the target account

Info

Optional - Using event notifications with SNS

Sending S3 event notifications to SNS may be beneficial, or even required, for teams that consume bucket event notifications in multiple applications. This is fully supported as long as the original S3 event notification message is passed through SNS to SQS transparently. In that case, you do not need to follow the steps for setting up event notifications to a single SQS queue; instead, follow the Amazon documentation here to set up the S3 → SNS → SQS fan-out.

A brief write-up of this architecture can be found in this AWS blog. Note this also helps if you have buckets in different regions/accounts and would like one centralized technology queue for all of your logging.
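Whichever path the notification takes, the body that finally arrives on the SQS queue can be handled the same way. As a minimal sketch (ours for illustration, not the Devo collector's actual code), a consumer can unwrap an optional SNS envelope and pull the bucket/key pairs out of the S3 event:

```python
import json

def extract_s3_objects(sqs_body: str) -> list:
    """Return (bucket, key) pairs from an SQS message body that is either a
    raw S3 event notification or an S3 event wrapped in an SNS envelope."""
    payload = json.loads(sqs_body)
    # SNS passthrough: the original S3 notification travels in the "Message" field
    if payload.get("Type") == "Notification" and "Message" in payload:
        payload = json.loads(payload["Message"])
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in payload.get("Records", [])
    ]
```

Because the S3 event structure is preserved end to end, the same parsing works for both the direct S3 → SQS setup and the S3 → SNS → SQS fan-out.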

Create an SQS queue for a specific service's event type (e.g., CloudTrail)

In this example, we will continue by setting up an SQS queue for our CloudTrail technology logs.


Navigate to the SQS console.


Click Create queue.


Create a Standard queue; the default configuration is fine.


In the Access policy section, select Advanced, then copy and paste the following policy, replacing the values wrapped in {{ }}.

Code Block
{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:{{SQS queue region}}:{{Account ID #}}:{{Queue name you are currently creating}}",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:{{Bucket name with data here}}" },
        "StringEquals": { "aws:SourceAccount": "{{Account ID # of the bucket}}" }
      }
    }
  ]
}
Info

An example resource ARN should look like this: arn:aws:sqs:us-east-1:0123456789:devo-example-sqs-queue
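To double-check your substitutions, it can help to assemble the queue ARN from its parts. A small hypothetical helper (the function name is ours, not part of any AWS or Devo tooling):

```python
def sqs_queue_arn(region: str, account_id: str, queue_name: str) -> str:
    """Assemble the SQS queue ARN used in the policy's Resource field."""
    return f"arn:aws:sqs:{region}:{account_id}:{queue_name}"

# Reproduces the example above:
# sqs_queue_arn("us-east-1", "0123456789", "devo-example-sqs-queue")
#   -> "arn:aws:sqs:us-east-1:0123456789:devo-example-sqs-queue"
```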


The rest of the default configuration is fine, but you can also set up a dead-letter queue and server-side encryption; both are transparent to our side.


Create the queue.


Copy the URL of your newly created queue and save it, as you will need to provide it to Devo.


Set up S3 event notifications


Navigate back to your S3 bucket with data in it.


Click the Properties tab of the bucket.


Click the Events box under Advanced settings.


Click Create event notification


Set up the event notification similar to the following:

  • The event notification name can follow whatever naming convention you need.

  • Type of event: All object create events

  • If you set a prefix for your technology types, set the same prefix here.

  • The suffix should be .json.gz

  • Set SQS Queue as the notification destination.

  • Select the name of the SQS queue you created earlier.


Click the Save button after configuring this.


If everything is configured correctly, your CloudTrail trail logs should now be generating corresponding messages in the queue.

Enabling SQS and S3 access using a cross-account IAM role

To allow the Devo collector to pull data from your AWS environment, we will need a cross-account IAM role in your account. You will have to provide this role's ARN to Devo.

Create an IAM policy

This IAM policy will:

  • Allow the role to read messages off the SQS queue and acknowledge (delete) them off the queue after successfully processing the messages

  • Retrieve the S3 object referenced in the SQS message so that Devo can read and process the data into the system

  • Provide limited access only to specified resources (minimal permissions)

Follow the next steps to create the IAM policy:


Navigate to the IAM console.


Go to the Policies section.


Create a policy.


Choose the JSON method and enter the following policy, replacing the items within {{ }} (the ARNs for the S3 bucket, optionally including the configured prefix, and for the SQS queue were set up in the previous steps of this guide).

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:ChangeMessageVisibility",
        "sqs:ReceiveMessage"
      ],
      "Resource": [
        "arn:aws:sqs:<<YOUR_SQS_REGION>>:<<ACCOUNT_NUMBER>>:<<QUEUE_NAME>>",
        "arn:aws:s3:::<<BUCKET_NAME>>/<<OPTIONAL_PREFIX_SCOPE_LIMIT>>/*"
      ]
    }
  ]
}

You can keep adding more resources if you have multiple SQS queues and S3 buckets that you would like Devo to pull and read from.

Info
  • If KMS encryption is active for the S3 bucket, the respective KMS key must be included as a resource within the IAM policy. Otherwise, the Devo collector will fail to pull events due to a permission error: "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied".

  • The trailing /* in the S3 ARN denotes access to the objects in the S3 bucket. If it is missing, calls to the S3 API will result in a permission error and the collector will not be able to access objects.
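The permissions above map directly onto the collector's read → fetch → acknowledge cycle. The sketch below illustrates that cycle; the clients are injected so the flow can be followed without AWS credentials (in practice they would be boto3 SQS and S3 clients), and the function is ours for illustration, not the collector's actual code.

```python
import json

def process_one_message(sqs, s3, queue_url: str) -> bool:
    """One iteration of the cycle the IAM policy permits. Returns True if a
    message was fetched, its S3 objects read, and the message acknowledged."""
    resp = sqs.receive_message(QueueUrl=queue_url,      # sqs:ReceiveMessage
                               MaxNumberOfMessages=1)
    messages = resp.get("Messages", [])
    if not messages:
        return False
    msg = messages[0]
    for rec in json.loads(msg["Body"]).get("Records", []):
        obj = s3.get_object(                            # s3:GetObject
            Bucket=rec["s3"]["bucket"]["name"],
            Key=rec["s3"]["object"]["key"],
        )
        obj["Body"].read()  # the collector would forward this data to Devo
    sqs.delete_message(QueueUrl=queue_url,              # sqs:DeleteMessage (acknowledge)
                       ReceiptHandle=msg["ReceiptHandle"])
    return True
```

If processing fails before delete_message is reached, the message becomes visible on the queue again after its visibility timeout (which is where sqs:ChangeMessageVisibility comes in), so no data is lost.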


Give the policy a name following your account's naming convention, and optionally a description.


Click Create and note down the name of the policy you created; it grants the access the Devo collector needs to function properly.

Create a cross-account role

Cross-account roles let roles/users from other AWS accounts (in this case, the Devo collector server AWS account) assume a role in your account. This sidesteps the need to exchange permanent credentials: credentials remain stored in their respective accounts, and AWS itself authenticates the identities. For more information, check this document.
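Under the hood, the collector obtains temporary credentials by calling STS AssumeRole against the role you create below, passing the ExternalId if one is configured in the trust policy. A sketch of the request parameters (the helper and session name are illustrative, not the collector's actual code):

```python
def build_assume_role_request(role_arn: str, external_id: str = "") -> dict:
    """Keyword arguments for boto3's sts_client.assume_role(**kwargs)."""
    kwargs = {
        "RoleArn": role_arn,
        "RoleSessionName": "devo-collector-session",  # illustrative session name
    }
    if external_id:
        # Must match the sts:ExternalId condition in the role's trust policy
        kwargs["ExternalId"] = external_id
    return kwargs
```

AssumeRole returns short-lived credentials, which is why no long-term access keys ever need to leave your account.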

Follow these steps to create the cross-account role:


Click Roles in the IAM console, then select Create role.


Create a role with the Another AWS account scope and use Account ID 837131528613.


Attach the policy you created in the previous steps (e.g., devo-xaccount-cs-policy).


Give this role a name (you will provide this to Devo)


Go into the newly created role and click Trust relationships → Edit trust relationship.


Change the existing policy document to the following, which allows only our collector server role to assume this role.

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::837131528613:role/devo-xaccount-cs-role"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "{{YOUR_CONFIGURED_EXTERNALID}}" }
      }
    }
  ]
}

Click Update Trust Policy to finish.

Information to be provided to Devo

At the end of this configuration process, the following pieces of information must be provided to Devo for the collector setup in order to complete the integration:

  • Technology type we will be consuming, or log format (if the collector is pulling data from an AWS service, as this guide does with CloudTrail, just the service name must be provided)

  • SQS Queue URL

  • Cross-account role ARN (i.e.: arn:aws:iam::<YOUR-ACCOUNT-ID>:role/devo-xs-collector-role) and optionally, ExternalID (if used in cross account role trust policy)

...


Purpose

AWS SQS can be used to send any kind of data to Devo; if the data is already located in AWS, SQS is the recommended way to send it. The AWS SQS collector provides superior reliability, speed, security, and flexibility.

The AWS SQS collector is commonly used with security-focused services like WAF, VPC, Control Tower, and CloudTrail.

Send data to Devo

There are three requirements to send data to Devo with SQS: data delivered to an S3 bucket, S3 event notifications routed to an SQS queue, and a cross-account IAM role that grants the collector access.

Data sources

| Data source | Security purpose | Collector service name | Devo table |
| --- | --- | --- | --- |
| Any (the collector can be customized to process any data; use a custom service only if there is no prebuilt service) | | custom_service | All |
| AWS CONFIGURATION LOGS | Cloud Resource Audit | aws_sqs_config | cloud.aws.configlogs.events |
| AWS ELB | Load Balancer | aws_sqs_elb | web.aws.elb.access |
| AWS ALB | Load Balancer | aws_sqs_alb | web.aws.alb.access |
| CISCO UMBRELLA | DNS | aws_sqs_cisco_umbrella | sig.cisco.umbrella.dns |
| CLOUDFLARE LOGPUSH | Content Distribution | aws_sqs_cloudflare_logpush | cloud.cloudflare.logpush.http |
| CLOUDFLARE AUDIT | Content Distribution | aws_sqs_cloudflare_audit | cloud.aws.cloudflare.audit |
| CLOUDTRAIL | AWS Audit | aws_sqs_cloudtrail | cloud.aws.cloudtrail.* |
| CLOUDTRAIL VIA KINESIS FIREHOSE | AWS Audit | aws_sqs_cloudtrail_kinesis | cloud.aws.cloudtrail.* |
| CLOUDWATCH | Instance Metrics | aws_sqs_cloudwatch | cloud.aws.cloudwatch.logs |
| CLOUDWATCH VPC | Private Cloud Metrics | aws_sqs_cloudwatch_vpc | cloud.aws.vpc.flow |
| CONTROL TOWER (in most cases, use the CloudTrail service instead) | VPC Flow Logs, CloudTrail, CloudFront, and/or AWS Config logs | aws_sqs_control_tower | |
| CROWDSTRIKE FALCON DATA REPLICATOR (deprecated) | | aws_sqs_fdr | edr.crowdstrike.cannon |
| CROWDSTRIKE FALCON DATA REPLICATOR | Antivirus | aws_sqs_fdr_large | edr.crowdstrike.cannon |
| GUARD DUTY | Threat Detection | aws_sqs_guard_duty | cloud.aws.guardduty.findings |
| GUARD DUTY VIA KINESIS FIREHOSE | | aws_sqs_guard_duty_kinesis | cloud.aws.guardduty.findings |
| IMPERVA FLEXPROTECT | Content Delivery | aws_sqs_incapsula | cef0.imperva.incapsula |
| LACEWORK | Container and Cloud | aws_sqs_lacework | monitor.lacework.[agent].* |
| PALO ALTO | Firewall | aws_sqs_palo_alto | firewall.paloalto.[file-log_type] |
| ROUTE 53 | Domain Name Service | aws_sqs_route53 | dns.aws.route53 |
| OPERATING SYSTEM | Windows and Unix events | aws_sqs_os | box.unix_cloudwatch, box.win_cloudwatch |
| SENTINEL ONE FUNNEL | Endpoint Detections | aws_sqs_s1_funnel | edr.sentinelone.dv |
| S3 ACCESS | S3 Bucket Audit | aws_sqs_s3_access | web.aws.s3.access |
| VPC LOGS | Private Cloud Metrics (published without CloudWatch) | aws_sqs_vpc | cloud.aws.vpc.flow |
| WAF LOGS | Firewall | aws_sqs_waf | cloud.aws.waf.logs |

Devo collector features

| Feature | Details |
| --- | --- |
| Allow parallel downloading (multipod) | allowed |
| Running environments | Cloud Collector App |
| Writes to | table |