Overview
Data source | Description | Collector service name | Devo table | Available from
---|---|---|---|---
Any | Any source you send to an SQS queue can be collected. | | |
VPC Flow Logs, CloudTrail, CloudFront, and/or AWS Config logs | | | |
| The files can be so large and hard to pull that, if the service above fails, you should use this one. | | |
Relational Database Audit Logs | | | |
For each setup, you can use this general configuration:

```json
{
  "global_overrides": { "debug": false },
  "inputs": {
    "sqs_collector": {
      "id": "34523",
      "enabled": true,
      "credentials": {
        "aws_cross_account_role": "if provided",
        "aws_external_id": "if needed/supplied"
      },
      "region": "us-east-2",
      "base_url": "https://sqs.us-east-2.amazonaws.com/",
      "sqs_visibility_timeout": 120,
      "sqs_wait_timeout": 20,
      "sqs_max_messages": 1,
      "ack_messages": false,
      "direct_mode": false,
      "do_not_send": false,
      "compressed_events": false,
      "debug_md5": false,
      "services": {
        "aws_sqs_kubernetes": {
          "encoding": "gzip",
          "type": "unseparated_json_processor",
          "config": { "key": "logEvents" }
        }
      }
    }
  }
}
```
The available services are listed in the table above. Every part of a service definition is overridable, so if you need to change the encoding, you can do so freely. You can also leave a service entry empty: `"service_name": {}`.
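For example, a `services` block can mix an empty entry (all defaults) with a partial override. This is a sketch of the structure only; `aws_sqs_cloudtrail` is used here as a hypothetical service name:

```json
"services": {
  "aws_sqs_cloudtrail": {},
  "aws_sqs_kubernetes": { "encoding": "gzip" }
}
```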
Custom services or overrides
For a custom service or override, the config can look like this:

```json
"services": {
  "custom_service": {
    "file_field_definitions": {},
    "filename_filter_rules": [],
    "encoding": "parquet",
    "file_format": {
      "type": "line_split_processor",
      "config": { "json": true }
    },
    "record_field_mapping": {},
    "routing_template": "my.app.ablo.backend",
    "line_filter_rules": []
  }
}
```
The main things you need:

- `file_format`: the type of processor to use
- `routing_template`: the tag you need
Collectors that need custom tags

- `aws_sqs_alb`: `web.aws.alb.access.SQS_REGION.SQS_ACCID` (`SQS_REGION` and `SQS_ACCID` need to be filled in)
- `aws_sqs_elb`: `web.aws.elb.access.SQS_REGION.SQS_ACCID` (`SQS_REGION` and `SQS_ACCID` need to be filled in)
- `aws_sqs_rds`: `cloud.aws.rds.audit.SQS_REGION.SQS_ACCID` (`SQS_REGION` and `SQS_ACCID` need to be filled in; you can also put information about the source database here instead of account IDs)
Types of processors
- `rds_processor`: RDS processor for the RDS service.
- `unseparated_json_processor`: use this if the events come in one massive JSON object.
- `split_or_unseparated_processor`: determines whether or not the log is split by `\n`.
- `aws_access_logs_processor`: for AWS access logs with `\n` splits.
- `single_json_object_processor`: for a single JSON object.
- `separated_json_processor`: similar to the other separators.
- `bluecoat_processor`: for the Blue Coat recipe.
- `json_object_to_linesplit_processor`: splits by a configured value.
- `json_array_processor`: for JSON arrays.
- `json_line_arrays_processor`: similar to the other separators.
- `jamf_processor`: Jamf log processing.
- `parquet_processor`: Parquet encoding.
- `guardduty_processor`: for GuardDuty logs.
- `vpc_flow_processor`: VPC service processor.
- `alt_vpc_flow_processor`: alternate VPC service processor.
- `kolide_processor`: for the Kolide service.
- `json_array_vpc_processor`: VPC service processor for JSON arrays.
Tagging
Tagging can be done in several ways. One way is to use file field definitions:

```json
"file_field_definitions": {
  "log_type": [ { "operator": "split", "on": "/", "element": 2 } ]
}
```

This splits the filename on `/` and takes element 2. Indexing starts at 0, as with arrays, so for a filename such as `cequence-data/cequence-devo-6x-NAieMI/detector`:

- 0 = `cequence-data`
- 1 = `cequence-devo-6x-NAieMI`
- 2 = `detector`

The extracted value can then be referenced in the routing template:

```json
"routing_template": "my.app.test_cequence.[file-log_type]"
```

Our final tag is `my.app.test_cequence.detector`.
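Putting the pieces together, the split-and-substitute behavior above can be sketched in a few lines of Python. The function names are illustrative, not the collector's API:

```python
def extract_field(filename: str, rules: list) -> str:
    """Apply file_field_definitions-style rules to a filename;
    only the "split" operator from the example above is sketched."""
    value = filename
    for rule in rules:
        if rule["operator"] == "split":
            value = value.split(rule["on"])[rule["element"]]
    return value

def render_tag(template: str, fields: dict) -> str:
    """Substitute [file-<name>] placeholders with extracted values."""
    for name, value in fields.items():
        template = template.replace("[file-" + name + "]", value)
    return template

definitions = {"log_type": [{"operator": "split", "on": "/", "element": 2}]}
filename = "cequence-data/cequence-devo-6x-NAieMI/detector/events.json"
fields = {name: extract_field(filename, rules)
          for name, rules in definitions.items()}
tag = render_tag("my.app.test_cequence.[file-log_type]", fields)
# tag == "my.app.test_cequence.detector"
```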