...

| Data source | Description | Collector service name | Devo table | Available from release |
| --- | --- | --- | --- | --- |
| Any | Theoretically, any source you send to an SQS queue can be collected |  |  | v1.0.0 |
| CONFIG LOGS |  | aws_sqs_config | cloud.aws.configlogs.events | v1.0.0 |
| AWS ELB |  | aws_sqs_elb | web.aws.elb.access | v1.0.0 |
| AWS ALB |  | aws_sqs_alb | web.aws.alb.access | v1.0.0 |
| CISCO UMBRELLA |  | aws_sqs_cisco_umbrella | sig.cisco.umbrella.dns | v1.0.0 |
| CLOUDFLARE LOGPUSH |  | aws_sqs_cloudflare_logpush | cloud.cloudflare.logpush.http | v1.0.0 |
| CLOUDFLARE AUDIT |  | aws_sqs_cloudflare_audit | cloud.aws.cloudflare.audit | v1.0.0 |
| CLOUDTRAIL |  | aws_sqs_cloudtrail | cloud.aws.cloudtrail.* | v1.0.0 |
| CLOUDTRAIL VIA KINESIS FIREHOSE |  | aws_sqs_cloudtrail_kinesis | cloud.aws.cloudtrail.* | v1.0.0 |
| CLOUDWATCH |  | aws_sqs_cloudwatch | cloud.aws.cloudwatch.logs | v1.0.0 |
| CLOUDWATCH VPC |  | aws_sqs_cloudwatch_vpc | cloud.aws.vpc.flow | v1.0.0 |
| CONTROL TOWER | VPC Flow Logs, CloudTrail, CloudFront, and/or AWS Config logs | aws_sqs_control_tower |  | v1.0.0 |
| FDR |  | aws_sqs_fdr | edr.crowdstrike.cannon | v1.0.0 |
| GUARD DUTY |  | aws_sqs_guard_duty | cloud.aws.guardduty.findings | v1.0.0 |
| GUARD DUTY VIA KINESIS FIREHOSE |  | aws_sqs_guard_duty_kinesis | cloud.aws.guardduty.findings | v1.0.0 |
| IMPERVA INCAPSULA |  | aws_sqs_incapsula | cef0.imperva.incapsula | v1.0.0 |
| LACEWORK |  | aws_sqs_lacework | monitor.lacework. | v1.0.0 |
| PALO ALTO |  | aws_sqs_palo_alto | firewall.paloalto.[file-log_type] | v1.0.0 |
| ROUTE 53 |  | aws_sqs_route53 | dns.aws.route53 | v1.0.0 |
| OS LOGS |  | aws_sqs_os | box.[file-log_type].[file-log_subtype].us | v1.0.0 |
| SENTINEL ONE FUNNEL |  | aws_sqs_s1_funnel | edr.sentinelone.dv | v1.0.0 |
| S3 ACCESS |  | aws_sqs_s3_access | web.aws.s3.access | v1.0.0 |
| VPC LOGS |  | aws_sqs_vpc | cloud.aws.vpc.flow | v1.0.0 |
| WAF LOGS |  | aws_sqs_waf | cloud.aws.waf.logs | v1.0.0 |

Options

See examples of common configurations here: General S3 Collector Configuration Examples and Recipes.
The many configurable options outlined in the README on GitLab are reproduced here. See the GitLab repository for specific examples in each subdirectory.

  • direct_mode -- true or false (default is false). Set to true if the logs are sent directly to the queue without using S3.

  • file_field_definitions -- defined as a dictionary mapping variable names (you decide) to lists of parsing rules.
    Each parsing rule has an operator, with its own keys that go along with it. Parsing rules are applied in the order they are listed in the configuration.

    • The "split" operator takes an "on" and an "element" -- the file name is split into pieces on the character or character sequence specified by "on", and whatever is at the specified "element" index is extracted, as in the example.

    • The "replace" operator takes a "to_replace" and a "replace_with".

    • For example, if your filename were "server_logs/12409834/ff.gz", this configuration would store the log_type as "serverlogs":

Code Block
"file_field_definitions": 
{
	"log_type": [{"operator": "split", "on": "/", "element": 0}, {"operator": "replace", "to_replace": "_", "replace_with": ""}]
}
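As a rough illustration, those two rules behave like the following Python (a minimal sketch, not the collector's actual code):

Code Block
filename = "server_logs/12409834/ff.gz"

# rule 1: split on "/" and keep element 0 -> "server_logs"
piece = filename.split("/")[0]

# rule 2: replace "_" with "" -> "serverlogs"
log_type = piece.replace("_", "")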
  • filename_filter_rules -- a list of rules for filtering out entire files.

  • encoding -- takes one of the following strings: "gzip", "none", "parquet".

  • ack_messages -- whether or not to delete messages from the queue after processing; takes boolean values. If not specified, the default is true. We recommend leaving this out of the config. If you do see it in a config, pay close attention to whether it is on or off.

  • file_format -- takes a dictionary with the following keys

    • type -- a string specifying which processor to use

      • single_json_object -- logs are stored as/in a JSON object

        • single_json_object_processor config options: "key" (string: the key under which the list of logs is stored). See cloudtrail_collector for an example.

          • Code Block
            config: {"key": "log"}
            fileobj:  {..."log": {...}}
      • unseparated_json_processor -- logs are stored as/in JSON objects which are written in a text file with no separator

        • unseparated_json config options: "key" (string: where the log is stored), "include" (dict: maps names of keys outside the inner part to be included, which can be renamed). If there is no key, that is, the whole JSON object is the desired log, set "flat": true. See aws_config_collector for an example.

          • Code Block
            fileobj:  {...}{...}{...}
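        Conceptually, parsing unseparated JSON objects means scanning the text and decoding one object at a time. A minimal Python sketch (illustrative only, not the collector's implementation):

          • Code Block
            import json

            def parse_unseparated(text):
                # decode one JSON object at a time, resuming where the last one ended
                decoder = json.JSONDecoder()
                idx, records = 0, []
                while idx < len(text):
                    obj, idx = decoder.raw_decode(text, idx)
                    records.append(obj)
                return records

            parse_unseparated('{"a": 1}{"b": 2}')  # -> [{'a': 1}, {'b': 2}]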
      • text_file_processor -- logs are stored as text files, potentially with lines and fields separated by e.g. commas and newlines

        • text_file config options: include options for how lines and records are separated (e.g. newline, tab, comma); good for CSV-style data.

      • line_split_processor -- logs are stored in a newline-separated file; works more quickly than separated_json_processor

        • config options: "json": true or false. Setting json to true assumes the logs are newline-separated JSON and allows them to be parsed by the collector, therefore enabling record-field mapping.

      • separated_json_processor -- logs are stored as many JSON objects with some kind of separator

        • config options: specify the separator, e.g. "separator": "||"; the default is newline if left unset.

          • Code Block
            fileobj:  {...}||{...}||{...}
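        With an explicit separator, the same idea reduces to a split. A hedged sketch:

          • Code Block
            import json

            raw = '{"a": 1}||{"b": 2}||{"c": 3}'
            # split on the configured separator and decode each piece
            records = [json.loads(part) for part in raw.split("||") if part.strip()]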
      • jamf_processor -- special processor for JAMF logs

      • aws_access_logs_processor -- special processor for AWS access logs

      • windows_security_processor -- special processor for Windows Security logs

      • vpc_flow_processor -- special processor for VPC Flow logs

      • json_line_arrays_processor -- processor for unseparated JSON objects spread across multiple lines of a single file

        • Code Block
          fileobj:  {...}{...}
          {...}{...}{...}
          {...}
      • dict_processor -- processor for logs that come as Python dictionary objects, i.e. in direct mode

    • config -- a dictionary of information the specified file_format processor needs

  • record_field_mapping -- a dictionary; each key defines a variable that can be parsed out of each record (and may be referenced later in filtering).
    For example, we may want to parse something and call it "type", by getting "type" from a certain key in the record (which may be multiple layers deep).

    Code Block
    {"type": {"keys": ["file", "type"], "operations": []}}

    keys is a list of the keys in the record to look into, in order, to find the value; it exists to handle nesting (essentially defining a path through the data). Suppose we have logs that look like this:

    Code Block
    {"file": {"type": {"log_type": 100}}}

    so if we want to get the log_type, we should list, in order, all the keys needed to walk through the JSON:

    Code Block
    keys: ["file", "type", "log_type"]

    In many cases you will probably only need one key.

    e.g. in flat JSON that isn't nested:

    Code Block
    {"log_type": 100, "other_info": "blah", ...}

    here you would just specify keys: ["log_type"]. A few operations are supported that can further alter the parsed information (like split and replace). The earlier snippet would grab whatever is located at log["file"]["type"] and name it "type". record_field_mapping defines variables by taking them from logs, and these variables can then be used for filtering. Let's say you have a log in JSON format like this which will be sent to Devo:

    Code Block
    {"file": {"value": 0, "type": "security_log"}}

    Specifying "type" in the record_field_mapping will allow the collector to extract that value, "security_log", and save it as type. Now suppose you want to change the tag dynamically based on that value: you could change the routing_template to something like my.app.datasource.[record-type]. For the log above, it would be sent to my.app.datasource.security_log. Now suppose you want to filter out (not send) any records which have the type security_log. You could write a line_filter_rule as follows:

    {"source": "record", "key": "type", "type": "match", "value": "security_log" } We specified the source as record because we want to use a variable from the record_field_mapping. We specified the key as “type” because that is the name of the variable we defined. We specify type as “match” because any record matching this rule we want to filter out. And we specify the value as security_log because we specifically do not want to send any records with the type equalling “security_log” The split operation is the same as if you ran the python split function on a string.

    Let's say you have a filename "logs/account_id/folder_name/filename" and you want to save the account_id as a variable to use for tag routing or filtering.

    You could write a file_field_definition like this:

    "account_id": [{"operator": "split", "on": "/", "element": 1}]

    This would store a variable called account_id by taking the entire filename, splitting it into pieces wherever it finds a forward slash, and taking the element at position 1. In Python it would look like:

    Code Block
    filename.split("/")[1]
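    Putting the pieces together, extracting a record_field_mapping variable amounts to walking the "keys" path down through nested dictionaries. A minimal sketch, assuming well-formed records:

    Code Block
    def extract_field(record, keys):
        # follow the list of keys down through nested dicts
        value = record
        for key in keys:
            value = value[key]
        return value

    record = {"file": {"type": {"log_type": 100}}}
    extract_field(record, ["file", "type", "log_type"])  # -> 100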
  • routing_template -- a string defining how to build the tag for each message, e.g.
    "my.app.wow.[record-type].[file-log_type]" -- each bracketed segment is replaced with the corresponding extracted variable; if the "type" extracted during record_field_mapping were "null", the [record-type] segment would resolve to "null" (e.g. "my.app.wow.null").

  • line_filter_rules -- a list of lists of rules for filtering out individual records so they do not get sent to Devo,
    for example:

Code Block
"line_filter_rules": [
    [{
        "source": "record",
        "key": "type",
        "type": "doesnotmatch",
        "value": "ldap"
    }],
    [
        {"source": "file", "key": "main-log_ornot", "type": "match", "value": "main-log"},
        {"source": "record", "key": "type", "type": "match", "value": "kube-apiserver-audit"}
    ]
]

This set of rules could be expressed in pseudocode as follows:

Code Block
if record.type != "ldap" OR (file.main-log_ornot == "main-log" AND record.type == "kube-apiserver-audit"):
    do_not_send_record()
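A sketch of how these rule sets combine, assuming (as the pseudocode above implies) that rules within an inner list are ANDed and the inner lists themselves are ORed:

Code Block
def rule_matches(rule, record_fields, file_fields):
    fields = record_fields if rule["source"] == "record" else file_fields
    value = fields.get(rule["key"])
    if rule["type"] == "match":
        return value == rule["value"]
    if rule["type"] == "doesnotmatch":
        return value != rule["value"]
    return False

def should_filter_out(line_filter_rules, record_fields, file_fields):
    # outer list: OR across rule sets; inner list: AND within a rule set
    return any(
        all(rule_matches(rule, record_fields, file_fields) for rule in rule_set)
        for rule_set in line_filter_rules
    )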

(Internal) Notes + Debugging
The config can include "debug_mode": true to print out some useful information as logs come in.
For local testing it is useful to set "ack_messages" to false, to try processing without consuming from the queue. Be careful to remove this or set it to true when launching the collector. The default is to ack messages if it is not set.

If something seems wrong at launch, you can set the following in the collector parameters / job config:

"debug_mode": true,
"do_not_send": true,
"ack_messages": false

...

Run the collector

Rw ui tabs macro
Rw tab
titleOn-premise collector

This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.

Structure

The following directory structure should be created before running the collector:

Code Block
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/ 
            └── config.yaml 
Note

Replace <product_name> with the proper value.

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/. Learn more about security credentials in Devo here.

Note

Replace <product_name> with the proper value.

Editing the config.yaml file

Code Block
globals:
  debug: <debug_status>
  id: <collector_id>
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
  multiprocessing: false
  queue_max_size_in_mb: 1024
  queue_max_size_in_messages: 1000
  queue_max_elapsed_time_in_sec: 60
  queue_wrap_max_size_in_messages: 100

outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>

inputs:
  sqs:
    id: 12345
    enabled: true
    credentials:
      aws_access_key_id: password
      aws_secret_access_key: secret-access-key
      aws_base_account_role: arn:aws:iam::837131528613:role/devo-xaccount-cs-role
      aws_cross_account_role: arn:aws:iam::{account-id}:role/{role-name}
      aws_external_id: extra_security_optional
    region: region
    base_url: https://sqs.{region}.amazonaws.com/{account-number}/{queue-name}
    sqs_visibility_timeout: 120
    sqs_wait_timeout: 20
    sqs_max_messages: 4
    ack_messages: false
    direct_mode: false
    do_not_send: false
    compressed_events: false
    services:
      custom_service:
        file_field_definitions: {}
        filename_filter_rules: []
        encoding: gzip
        ack_messages: false
        file_format:
          type: single_json_object_processor
          config:
            key: Records
        record_field_mapping: {}
        routing_template: my.app.source1.type1
        line_filter_rules: []
Info

All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.

Replace the placeholders with your required values following the description table below:

| Parameter | Data type | Type | Value range | Details |
| --- | --- | --- | --- | --- |
| debug_status | bool | Mandatory | false / true | If the value is true, the debug logging traces will be enabled when running the collector. If the value is false, only the info, warning and error logging levels will be printed. |
| collector_id | int | Mandatory | Minimum length: 1 / Maximum length: 5 | Use this param to give a unique ID to this collector. |
| collector_name | str | Mandatory | Minimum length: 1 / Maximum length: 10 | Use this param to give a valid name to this collector. |
| devo_address | str | Mandatory | collector-us.devo.io / collector-eu.devo.io | Use this param to identify the Devo Cloud where the events will be sent. |
| chain_filename | str | Mandatory | Minimum length: 4 / Maximum length: 20 | Use this param to identify the chain.cert file downloaded from your Devo domain. Usually this file's name is chain.crt. |
| cert_filename | str | Mandatory | Minimum length: 4 / Maximum length: 20 | Use this param to identify the file.cert downloaded from your Devo domain. |
| key_filename | str | Mandatory | Minimum length: 4 / Maximum length: 20 | Use this param to identify the file.key downloaded from your Devo domain. |
| short_unique_id | int | Mandatory | Minimum length: 1 / Maximum length: 5 | Use this param to give a unique ID to this input service. Note: this parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| input_status | bool | Mandatory | false / true | Use this param to enable or disable the given input logic when running the collector. If the value is true, the input will be run; if false, it will be ignored. |
| base_url | str | Mandatory |  | By default, the base URL is https://sqs.{region}.amazonaws.com/{account-number}/{queue-name}. This needs to be set to the URL of your SQS queue. |
| aws_access_key_id | str | Mandatory/Optional | Any | Only needed if not using cross-account access. |
| aws_secret_access_key | str | Mandatory/Optional | Any | Only needed if not using cross-account access. |
| aws_base_account_role | str | Mandatory/Optional | Any | Only needed if using cross-account access. This is Devo's cross-account role. |
| aws_cross_account_role | str | Mandatory/Optional | Any | Only needed if using cross-account access. This is your cross-account role. |
| aws_external_id | str | Optional | Any | Extra security you can set up. |
| ack_messages | bool | Mandatory | false / true | Needs to be set to true to delete messages from the queue after processing. Leave false until testing is complete. |
| direct_mode | bool | Optional | false / true | Set to false for almost all scenarios. This parameter should be removed if it is not used. |
| do_not_send | bool | Optional | false / true | Set to true to not send the logs to Devo. This parameter should be removed if it is not used. |
| debug_md5 | bool | Optional | false / true | Set to true to send the message MD5 to my.app.sqs.message_body; only needed for extra debugging on duplicates. This parameter should be removed if it is not used. |
| sqs_visibility_timeout | int | Mandatory | Min: 120 / Max: 43200 (higher values have not been needed) | Sets the SQS visibility timeout, in seconds, between the queue and the collector. The collector has to download and process large files; increase this value if processing is interrupted before it finishes. Otherwise it defaults to 120. |
| sqs_wait_timeout | int | Mandatory | Min: 20 / Max: 20 | The minimum has handled most customer scenarios so far. |
| sqs_max_messages | int | Mandatory | Min: 1 / Max: 6 | This should always be set to 1. |
| region | str | Mandatory | Example: us-east-1 | This is the region used in the base URL. |
| compressed_events | bool | Mandatory | false / true | Only works with gzip compression; should be false unless you see the error below. If you see errors like 'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte, the events may need to be decompressed. |
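As a rough illustration of what compressed_events toggles: gzip-compressed event bodies must be decompressed before they can be decoded as UTF-8 (a minimal sketch, not the collector's code):

Code Block
import gzip

def decode_event(raw_bytes, compressed_events=False):
    # gzip-compressed bytes fail utf-8 decoding unless decompressed first
    if compressed_events:
        raw_bytes = gzip.decompress(raw_bytes)
    return raw_bytes.decode("utf-8")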

Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:

| Collector Docker image | SHA-256 hash |
| --- | --- |
| collector-aws_sqs_if-docker-image-1.2.0 | d4a462b75032731042a2ce3d82ca92e6a0ee1b8b099b5371ecaa7028bc843e4f |

Use the following command to add the Docker image to the system:

Code Block
gunzip -c <image_file>-<version>.tgz | docker load
Note

Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace <image_file> and <version> with a proper value.

The Docker image can be deployed on the following services:

Docker

Execute the following command in the root directory <any_directory>/devo-collectors/<product_name>/:

Code Block
docker run \
--name collector-<product_name> \
--volume $PWD/certs:/devo-collector/certs \
--volume $PWD/config:/devo-collector/config \
--volume $PWD/state:/devo-collector/state \
--env CONFIG_FILE=config.yaml \
--rm \
--interactive \
--tty \
<image_name>:<version>
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Docker Compose

The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.

Code Block
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}

To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

Code Block
IMAGE_VERSION=<version> docker-compose up -d
Note

Replace <product_name>, <image_name> and <version> with the proper values.

Rw tab
titleCloud collector

We use a piece of software called Collector Server to host and manage all our available collectors.

To enable the collector for a customer:

  1. In the Collector Server GUI, access the domain in which you want this instance to be created.

  2. Click Add Collector and find the one you wish to add.

  3. In the Version field, select the latest value.

  4. In the Collector Name field, set the value you prefer (this name must be unique inside the same Collector Server domain).

  5. In the sending method, select Direct Send. Direct Send configuration is optional for collectors that create Table events, but mandatory for those that create Lookups.

  6. In the Parameters section, set the Collector Parameters as follows:

Editing the JSON configuration

Code Block
{
  "global_overrides": {
    "debug": false
  },
  "inputs": {
    "sqs_collector": {
      "id": "12351",
      "enabled": true,
      "credentials": {
        "aws_access_key_id": "",
        "aws_secret_access_key": "",
        "aws_base_account_role": "arn:aws:iam::837131528613:role/devo-xaccount-cs-role",
        "aws_cross_account_role": "",
        "aws_external_id": ""
      },
      "ack_messages": false,
      "direct_mode": false,
      "do_not_send": false,
      "compressed_events": false,
      "debug_md5": true,
      "base_url": "https://us-west-1.queue.amazonaws.com/id/name-of-queue",
      "region": "us-west-1",
      "sqs_visibility_timeout": 240,
      "sqs_wait_timeout": 20,
      "sqs_max_messages": 1,
      "services": {
        "custom_service": {
          "file_field_definitions": {},
          "filename_filter_rules": [],
          "encoding": "gzip",
          "send_filtered_out_to_unknown": false,
          "file_format": {
            "type": "line_split_processor",
            "config": {
              "json": true
            }
          },
          "record_field_mapping": {
            "event_simpleName": {
              "keys": [
                "event_simpleName"
              ]
            }
          },
          "routing_template": "edr.crowdstrike.cannon",
          "line_filter_rules": [
            [
              {
                "source": "record",
                "key": "event_simpleName",
                "type": "match",
                "value": "EndOfProcess"
              }
            ],
            [
              {
                "source": "record",
                "key": "event_simpleName",
                "type": "match",
                "value": "DeliverLocalFXToCloud"
              }
            ]
          ]
        }
      }
    }
  }
}
Info

All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the services object.

Note

Please replace the placeholders with real-world values following the description table below:

| Parameter | Data type | Type | Value range / Format | Details |
| --- | --- | --- | --- | --- |
| debug_status | bool | Mandatory | false / true | If the value is true, the debug logging traces will be enabled when running the collector. If the value is false, only the info, warning and error logging levels will be printed. |
| short_unique_id | int | Mandatory | Minimum length: 1 / Maximum length: 5 | Use this param to give a unique ID to this input service. Note: this parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| enabled | bool | Mandatory | false / true | Use this param to enable or disable the given input logic when running the collector. If the value is true, the input will be run; if false, it will be ignored. |
| base_url | str | Mandatory |  | By default, the base URL is https://sqs.{region}.amazonaws.com/{account-number}/{queue-name}. This needs to be set to the URL of your SQS queue. |
| aws_access_key_id | str | Mandatory/Optional | Any | Only needed if not using cross-account access. |
| aws_secret_access_key | str | Mandatory/Optional | Any | Only needed if not using cross-account access. |
| aws_base_account_role | str | Mandatory/Optional | Any | Only needed if using cross-account access. This is Devo's cross-account role. |
| aws_cross_account_role | str | Mandatory/Optional | Any | Only needed if using cross-account access. This is your cross-account role. |
| aws_external_id | str | Optional | Any | Extra security you can set up. |
| ack_messages | bool | Mandatory | false / true | Needs to be set to true to delete messages from the queue after processing. Leave false until testing is complete. |
| direct_mode | bool | Optional | false / true | Set to false for almost all scenarios. This parameter should be removed if it is not used. |
| do_not_send | bool | Optional | false / true | Set to true to not send the logs to Devo. This parameter should be removed if it is not used. |
| debug_md5 | bool | Optional | false / true | Set to true to send the message MD5 to my.app.sqs.message_body; only needed for extra debugging on duplicates. This parameter should be removed if it is not used. |
| sqs_visibility_timeout | int | Mandatory | Min: 120 / Max: 43200 (higher values have not been needed) | Sets the SQS visibility timeout, in seconds, between the queue and the collector. The collector has to download and process large files; increase this value if processing is interrupted before it finishes. Otherwise it defaults to 120. |
| sqs_wait_timeout | int | Mandatory | Min: 20 / Max: 20 | The minimum has handled most customer scenarios so far. |
| sqs_max_messages | int | Mandatory | Min: 1 / Max: 6 | This should always be set to 1. |
| region | str | Mandatory | Example: us-east-1 | This is the region used in the base URL. |
| compressed_events | bool | Mandatory | false / true | Only works with gzip compression; should be false unless you see the error below. If you see errors like 'utf-8' codec can't decode byte 0xa9 in position 36561456: invalid start byte, the events may need to be decompressed. |
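For the cross-account parameters above, the role assumption is conceptually similar to the following boto3 sketch (illustrative only; the ARNs and session name are placeholders):

Code Block
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::{account-id}:role/{role-name}",  # aws_cross_account_role
    RoleSessionName="devo-sqs-collector",                  # hypothetical session name
    ExternalId="extra_security_optional",                  # aws_external_id, if configured
)["Credentials"]

sqs = boto3.client(
    "sqs",
    region_name="us-west-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)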

...