Overview
Logs generated by most AWS services (CloudTrail, VPC Flow Logs, Elastic Load Balancer, etc.) can be exported as blob objects to S3. Many third-party services have adopted the same paradigm, so it has become a common pattern across many different technologies. The Devo Professional Services and Technical Acceleration teams have base collector code that leverages this S3 paradigm to collect logs and can be customized for the different technology logs a customer may store in S3.
This documentation walks through setting up your AWS infrastructure so that our collector integration works out of the box:
Sending data to S3 (this guide uses Cloudtrail as a data source service)
Setting up S3 event notifications to SQS
Enabling SQS and S3 access using a cross-account IAM role
Gathering information to be provided to Devo for collector setup
General architecture diagram
...
Requirements
Access to S3, SQS, IAM, and CloudTrail services
Permissions to send data to S3
Knowledge of log format/technology type being stored into S3
Creating an S3 bucket and setting up a data feed (CloudTrail example)
The following will be set up during this section:
S3 bucket for data storage
CloudTrail trail for data logging into an S3 bucket
Create an S3 bucket
Navigate to AWS Management Console and select S3.
Create a new bucket for these logs, or skip to the next step if you are using an existing bucket. The default S3 bucket permissions are fine.
Set up a CloudTrail trail to log events into an S3 bucket
...
After the bucket has been created, we will need to set up a data feed into this S3 bucket via CloudTrail. Click CloudTrail.
...
Create a new trail following these steps:
...
Click Create trail.
...
When setting up the trail, make sure to choose the S3 bucket you want CloudTrail to send data into. If you have an existing S3 bucket, select that option and enter your S3 bucket name. Otherwise, create a new S3 bucket here.
...
A prefix is optional but highly recommended, as it simplifies routing S3 event notifications to different SQS queues.
...
All other options on this page are optional, and the default settings work. Check with your infrastructure team about your organization's requirements.
...
On the next page, choose the log events you wish CloudTrail to capture. At a minimum, we recommend enabling Management events. Data events and Insight events incur additional charges, so check with your team before enabling them. Data events in particular can generate a very large volume of data if your account makes heavy use of S3, so check with your AWS team to see whether they are worthwhile to track.
...
Finish up and create the trail.
...
Creating an SQS queue and enabling S3 event notifications
SQS provides the following benefits from our perspective:
Built-in retries when processing of a message fails
Dead-letter queueing (if enabled when setting up the SQS queue)
Tolerates downstream outages without losing processing state
Allows workers to be parallelized for very high-volume data
Guaranteed at-least-once delivery (S3 and SQS guarantees)
Ability to have multiple S3 buckets (even in other accounts) send events to the same SQS queue, via S3 event notifications relayed through SNS to SQS in the target account
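The flow above can be sketched from the consumer's side. The snippet below is a minimal illustration (not Devo's actual collector code) of how a worker interprets the S3 event notification bodies it reads off the queue; the sample message mirrors the `Records` shape S3 publishes:

```python
import json
from urllib.parse import unquote_plus

def parse_s3_event(message_body: str) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 event notification
    delivered through SQS. Returns [] for non-Records payloads such
    as the s3:TestEvent sent when notifications are first enabled."""
    event = json.loads(message_body)
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces become '+').
        key = unquote_plus(s3["object"]["key"])
        objects.append((bucket, key))
    return objects

# Example message body, shaped like what S3 delivers to the queue:
sample = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-cloudtrail-bucket"},
            "object": {"key": "AWSLogs/123456789012/CloudTrail/log.json.gz"},
        },
    }]
})
print(parse_s3_event(sample))
# → [('my-cloudtrail-bucket', 'AWSLogs/123456789012/CloudTrail/log.json.gz')]
```

Note the `unquote_plus` call: S3 URL-encodes object keys in notifications, so keys with spaces or special characters must be decoded before the object is fetched.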
Info: Optional - Using event notifications with SNS. Sending S3 event notifications to SNS may be beneficial or required for teams that consume bucket event notifications in multiple applications. This is fully supported as long as the original S3 event notification message is passed through SNS to SQS transparently. In that case you will not need to follow the steps to set up event notifications to a single SQS queue, but can instead follow the Amazon documentation to set this up. A brief write-up of this architecture can be found in this AWS blog. Note this approach also helps if you have buckets in different regions or accounts and would like one centralized technology queue for all of your logging.
Create an SQS queue for a specific service's event type (e.g., CloudTrail)
In this example, we will continue by setting up an SQS queue for our CloudTrail technology logs.
Navigate to the SQS console.
Click Create queue.
Create a Standard queue; the default configuration is fine.
In the Access policy section, select Advanced, then copy and paste the following policy, replacing the values inside {{ }}.
The rest of the default configuration is fine, but you can set up a dead-letter queue and server-side encryption; both are transparent to our side.
Create the queue.
Copy the URL of your newly created queue and save it; you will need to provide it to Devo.
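The access policy body itself is not reproduced above. As a sketch, a policy that allows S3 to deliver event notifications to the queue typically looks like the following; the {{ }} placeholders (queue ARN and bucket ARN) are yours to substitute:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "{{your-sqs-queue-arn}}",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "{{your-s3-bucket-arn}}" }
      }
    }
  ]
}
```

The `aws:SourceArn` condition restricts the queue so that only event notifications originating from your bucket are accepted.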
Set up S3 event notifications
Navigate back to your S3 bucket with data in it.
Click the Properties tab of the bucket.
Click the Events box under Advanced settings.
Click Create event notification.
Set up the event notification similar to the following:
Click the Save button after configuring this.
If everything was configured correctly, the CloudTrail trail should now be generating corresponding messages in the queue.
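If you prefer to configure the notification programmatically rather than in the console, the equivalent bucket notification configuration (the JSON shape accepted by `aws s3api put-bucket-notification-configuration`) might look like this; the queue ARN and prefix values are placeholders:

```json
{
  "QueueConfigurations": [
    {
      "QueueArn": "{{your-sqs-queue-arn}}",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "{{your-cloudtrail-prefix}}/" }
          ]
        }
      }
    }
  ]
}
```

The prefix filter is what lets one bucket feed several queues: each technology's logs land under a different prefix, and each notification configuration routes only its own prefix to its queue.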
Enabling SQS and S3 access using a cross-account IAM role
To allow the Devo collector to pull data from your AWS environment, you will need to create a cross-account IAM role in your account and provide its ARN to Devo.
Create an IAM policy
This IAM policy will:
Allow the role to read messages off the SQS queue and acknowledge (delete) them off the queue after successfully processing the messages
Retrieve the S3 object referenced in the SQS message so that Devo can read and process the message into the system
Provide limited access only to specified resources (minimal permissions)
Follow the next steps to create the IAM policy:
Navigate to the IAM console.
Go to the Policies section.
Create a policy.
Choose the JSON method and enter the following policy, replacing the items within {{ }} (the ARNs for the S3 bucket, optionally including the configured prefix, and the SQS queue were set up in the previous steps of this guide).
You can keep adding more resources if you have multiple SQS queues and S3 buckets that you would like Devo to pull and read from.
Give the policy a name with the naming convention that your account uses as necessary and an optional description.
Click Create and note down the name of the policy you've created; it grants the access the Devo collector needs to function properly.
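The policy JSON referenced in the steps above is not included here. A minimal sketch matching the three stated goals (read and delete SQS messages, retrieve the referenced S3 objects, and touch only the named resources) could look like the following; the resource ARNs are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "{{your-sqs-queue-arn}}"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "{{your-s3-bucket-arn}}/{{optional-prefix}}*"
    }
  ]
}
```

To grant access to additional queues and buckets, add their ARNs to the respective `Resource` entries rather than widening the actions.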
Create a cross-account role
Cross-account roles let roles or users from another AWS account (in this case, the Devo collector server's AWS account) assume a role in your account. This sidesteps the need to exchange permanent credentials: credentials remain stored in their respective accounts, and AWS itself authenticates the identities. For more information, check this document.
Follow these steps to create the cross-account role:
Click Roles in the IAM console, then select Create role.
Create a role with the Another AWS account scope and use Account ID 837131528613.
Attach the policy you created in the previous steps (e.g., devo-xaccount-cs-policy).
Give this role a name (you will provide this name to Devo).
Go into the newly created role and click Trust relationships → Edit trust relationship.
Change the existing policy document to the following, which allows only our collector server role to assume this role.
Click Update Trust Policy to finish.
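The trust policy document itself is not reproduced above. As a sketch, a trust policy scoped to the Devo collector account (ID 837131528613, from the role-creation step) with an optional ExternalId condition typically takes this shape; whether the principal should be the account root or a specific collector role is an assumption here, so confirm the exact principal value with Devo:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::837131528613:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "{{your-external-id}}" }
      }
    }
  ]
}
```

If you do not use an ExternalId, drop the `Condition` block; if you do, remember to share the value with Devo, as noted in the information checklist below.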
Information to be provided to Devo
At the end of this configuration process, the following information must be provided to Devo to complete the collector setup:
Technology type or log format of the data we will be consuming (if the collector is pulling data from an AWS service, as with the CloudTrail example in this guide, just the service name is needed)
SQS Queue URL
Cross-account role ARN (e.g., arn:aws:iam::<YOUR-ACCOUNT-ID>:role/devo-xs-collector-role) and, optionally, the ExternalId (if used in the cross-account role trust policy)
...
Purpose
AWS SQS can be used to send any kind of data to Devo. If the data is already located in AWS, SQS is the recommended way to send it to Devo. The AWS SQS collector provides strong reliability, speed, security, and flexibility.
The AWS SQS collector is commonly used to collect logs from services such as WAF, VPC, Control Tower, and CloudTrail.
Send data to Devo
Sending data to Devo with SQS requires the following:
Enable the collector with the service matching the data format.
Place data in the S3 bucket.
Data sources
| Data source | Security purpose | Collector service name | Devo table |
|---|---|---|---|
| Any | The collector can be customized to process any data. Use a custom service only if there is no prebuilt service. | | All |
| | Cloud Resource Audit | | |
| | Load Balancer | | |
| | Load Balancer | | |
| | DNS | | |
| | Content Distribution | | |
| | Content Distribution | | |
| | AWS Audit | | |
| CLOUDTRAIL VIA KINESIS FIREHOSE | AWS Audit | | |
| | Instance Metrics | | |
| CLOUDWATCH VPC | Private Cloud Metrics | | |
| | In most cases, use the CloudTrail service instead. VPC Flow Logs, CloudTrail, CloudFront, and/or AWS Config logs | | |
| | deprecated | | |
| | Antivirus | | |
| | Threat Detection | | |
| GUARD DUTY VIA KINESIS FIREHOSE | | | |
| | Content Delivery | | |
| | Container and Cloud | | |
| | Firewall | | |
| | Domain Name Service | | |
| OPERATING SYSTEM | Windows and Unix events | | |
| | Endpoint Detections | | |
| | S3 Bucket Audit | | |
| | Private Cloud Metrics | | |
| | Firewall | | |
Devo collector features
| Feature | Details |
|---|---|
| Allow parallel downloading ( | |
| Running environments | |
| Writes to | |