The SQS collector can be configured to write any log to any table. Devo recommends using a pre-built service that fits your logs; if no pre-built service fits, engage Devo Professional Services to create a custom service. If you need to modify or filter logs, Devo recommends AWS Lambda.
1. Authorize SQS data access.
2. Add data to the S3 bucket, preferably in a consistent format. For example:
   - If the data is JSON objects, the keys of the JSON objects should be the same (some objects can omit some keys).
   - If the data is in comma-separated value format, the number of columns must always be the same.
3. Get a log sample from the S3 bucket.
4. Determine whether the S3 contents are compressed.
5. Choose a destination tag. Contact us for assistance checking for an existing tag, or use a my.app tag.
6. In the Cloud Collector App, create an SQS collector instance using the following parameters template, replacing the values enclosed in < >:
```json
{
  "inputs": {
    "sqs_collector": {
      "id": "<FIVE_UNIQUE_DIGITS>",
      "services": {
        "custom_service": {
          <OPTIONS>,
          "routing_template": "<DESTINATION TAG>"
        }
      },
      "credentials": {
        "aws_cross_account_role": "arn:<PARTITION>:iam::<YOUR_AWS_ACCOUNT_NUMBER>:role/<YOUR_ROLE>",
        "aws_external_id": "<EXTERNAL_ID>"
      },
      "region": "<REGION>",
      "base_url": "https://sqs.<REGION>.amazonaws.com/<YOUR_AWS_ACCOUNT_NUMBER>/<QUEUE_NAME>"
    }
  }
}
```
A complete example configuration:

```json
{
  "global_overrides": { "debug": false },
  "inputs": {
    "sqs_collector": {
      "id": "12351",
      "enabled": true,
      "credentials": {
        "aws_access_key_id": "",
        "aws_secret_access_key": "",
        "aws_base_account_role": "arn:aws:iam::837131528613:role/devo-xaccount-cs-role",
        "aws_cross_account_role": "",
        "aws_external_id": ""
      },
      "ack_messages": true,
      "direct_mode": false,
      "do_not_send": false,
      "compressed_events": false,
      "base_url": "https://us-west-1.queue.amazonaws.com/id/name-of-queue",
      "region": "us-west-1",
      "sqs_visibility_timeout": 240,
      "sqs_wait_timeout": 20,
      "sqs_max_messages": 1,
      "services": {
        "custom_service": {
          "file_field_definitions": {
            "log_type": [
              { "operator": "split", "on": "/", "element": 0 },
              { "operator": "replace", "to_replace": "_", "replace_with": "" }
            ]
          },
          "filename_filter_rules": [
            [ { "type": "match", "pattern": "CloudTrail-Digest" } ],
            [ { "type": "match", "pattern": "ConfigWritabilityCheckFile" } ]
          ],
          "encoding": "gzip",
          "send_filtered_out_to_unknown": false,
          "file_format": {
            "type": "line_split_processor",
            "config": { "json": true }
          },
          "record_field_mapping": {
            "event_simpleName": { "keys": [ "event_simpleName" ] }
          },
          "routing_template": "destination tag",
          "line_filter_rules": [
            [ { "source": "record", "key": "event_simpleName", "type": "match", "value": "EndOfProcess" } ],
            [ { "source": "record", "key": "event_simpleName", "type": "match", "value": "DeliverLocalFXToCloud" } ]
          ]
        }
      }
    }
  }
}
```
Processors are selected in the type section within file_format. The processor must match the format of the events in the queue.
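As a sketch of what a line-split processor with "json": true does (the helper below is illustrative, not the collector's internal API), each S3 object body is split on newlines and every non-empty line is parsed as a JSON event:

```python
import json

def line_split_json(body: str) -> list:
    """Split a file body into lines and parse each non-empty line as JSON.

    Illustrative sketch of a line_split_processor with "json": true;
    not the collector's actual implementation.
    """
    events = []
    for line in body.splitlines():
        line = line.strip()
        if line:
            events.append(json.loads(line))
    return events

# Hypothetical S3 object body: two newline-delimited JSON events.
body = '{"event_simpleName": "EndOfProcess"}\n{"event_simpleName": "DnsRequest"}\n'
for event in line_split_json(body):
    print(event["event_simpleName"])
```

If the objects in the bucket were CSV instead, a different processor type would have to be selected to match.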
Custom Service Options
Automatic Tagging

Tags can be generated using record field mapping or file field definitions.

File name split

A split operation builds a field from part of the file name. In the example configuration, the operator { "operator": "split", "on": "/", "element": 0 } takes the first "/"-separated element of the file name as the log_type value, which can then be used in the routing template.

File name split and replace

If the element extracted from the file name contains an unwanted special character, it is helpful to remove it when creating the tag. In the example configuration, the split above combined with { "operator": "replace", "to_replace": "_", "replace_with": "" } will result in the extracted element with its underscores removed.
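The split and replace operators from the example's file_field_definitions can be sketched in Python (the file name below is hypothetical, and the helper is illustrative rather than the collector's internal code):

```python
def apply_file_field_definitions(filename: str, operations: list) -> str:
    """Apply a chain of split/replace operators to a file name.

    Illustrative sketch of file_field_definitions semantics.
    """
    value = filename
    for op in operations:
        if op["operator"] == "split":
            # Split on the separator and keep one element.
            value = value.split(op["on"])[op["element"]]
        elif op["operator"] == "replace":
            # Replace every occurrence of a substring.
            value = value.replace(op["to_replace"], op["replace_with"])
    return value

# The operator chain from the example configuration: take the first
# "/"-separated path element, then strip underscores.
ops = [
    {"operator": "split", "on": "/", "element": 0},
    {"operator": "replace", "to_replace": "_", "replace_with": ""},
]
print(apply_file_field_definitions("cloud_trail/2024/01/file.json.gz", ops))
# prints "cloudtrail"
```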
Options for filtering

Line-level filters

These are lists of rules for filtering out single events.
Suppose we want to discard all events whose event_simpleName is either EndOfProcess or DeliverLocalFXToCloud, as in the example configuration. After configuring the collector properly, a Devo query that searches for those values should return no events. In this case, the key for the filter is event_simpleName. Elements in different lists are ORed together: an event is discarded if it matches any one of the lists.
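A sketch of such a verification query; the table name my.app.sqs.events is a placeholder for whichever destination tag you chose, and assumes the field arrives parsed as event_simpleName:

```
from my.app.sqs.events
where event_simpleName = "EndOfProcess"
   or event_simpleName = "DeliverLocalFXToCloud"
select *
```

If the filters are working, this query returns no rows.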
What if we want to filter out events that match a pseudocode query with mixed conditions, for example one field equal to some value AND a second field equal to another, OR a third condition? In this case, the filter uses several keys. Elements in the same list are ANDed together, while elements in different lists are ORed: put rules that must all hold in one list, and add a separate list for each alternative.
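A minimal sketch of the line-filter semantics described above (illustrative, not the collector's implementation; only equality "match" rules on record keys are modeled here):

```python
def is_filtered_out(record: dict, rule_lists: list) -> bool:
    """Return True if a record matches the line_filter_rules.

    Rules inside one list are ANDed; the lists themselves are ORed,
    so matching any single list discards the event.
    """
    for rule_list in rule_lists:
        if all(record.get(rule["key"]) == rule["value"] for rule in rule_list):
            return True
    return False

# The two single-rule lists from the example configuration:
# matching either one discards the event.
rules = [
    [{"source": "record", "key": "event_simpleName", "type": "match", "value": "EndOfProcess"}],
    [{"source": "record", "key": "event_simpleName", "type": "match", "value": "DeliverLocalFXToCloud"}],
]
print(is_filtered_out({"event_simpleName": "EndOfProcess"}, rules))  # True
print(is_filtered_out({"event_simpleName": "DnsRequest"}, rules))    # False
```

To express an AND of two conditions, both rules would go inside the same inner list instead.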
File name filters

These are lists of rules that filter out entire files when the specified pattern matches the file name. A "match" rule filters out files whose names contain the pattern; in the example configuration, files whose names contain CloudTrail-Digest or ConfigWritabilityCheckFile are discarded before processing. A rule can also be written the other way around, to filter out files that do not contain a given pattern.
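A sketch of the filename_filter_rules semantics (an assumption, not the collector's internal code; "match" is treated here as a regex search over the file name, and the example file names are hypothetical):

```python
import re

def file_is_filtered_out(filename: str, rule_lists: list) -> bool:
    """Return True if a file name matches the filename_filter_rules.

    Rules within one list are ANDed; the lists themselves are ORed,
    so matching any single list discards the whole file.
    """
    for rule_list in rule_lists:
        if all(re.search(rule["pattern"], filename) for rule in rule_list):
            return True
    return False

# The filename_filter_rules from the example configuration.
rules = [
    [{"type": "match", "pattern": "CloudTrail-Digest"}],
    [{"type": "match", "pattern": "ConfigWritabilityCheckFile"}],
]
print(file_is_filtered_out("AWSLogs/123/CloudTrail-Digest/f.json.gz", rules))  # True
print(file_is_filtered_out("AWSLogs/123/CloudTrail/f.json.gz", rules))         # False
```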
If something seems wrong at launch, you can enable debug settings in the collector parameters/job config. These will print data as it is processed, stop messages from being acknowledged (so they are not deleted from the queue), and, at the last step, prevent the data from being sent to Devo. In this way, you can easily check whether something is not working properly.
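A sketch of those debug settings, using parameter names that appear in the example configuration above (that these three flags are the complete set is an assumption):

```json
{
  "global_overrides": { "debug": true },
  "inputs": {
    "sqs_collector": {
      "ack_messages": false,
      "do_not_send": true
    }
  }
}
```

With ack_messages set to false, messages stay on the queue and become visible again after the visibility timeout, so the same data can be replayed while testing. Remember to revert all three settings before going to production.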