...
Tagging
Tagging can be done in several ways. One way is by using the file field definitions:
Code Block
"file_field_definitions": {
    "log_type": [
        {
            "operator": "split",
            "on": "/",
            "element": 2
        }
    ]
},
These are the elements of the filename object:
...
If you look at the highlighted filename object, you can see that we are splitting on / and taking the value at index 2. Indexing starts at 0, like arrays. So:
0 = cequence-data
1 = cequence-devo-6x-NAieMI
2 = detector
"routing_template": "my.app.test_cequence.[file-log_type]"
Our final tag is my.app.test_cequence.detector
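Putting the pieces together, the tag derivation amounts to something like the sketch below. This is an illustration, not the collector's actual code, and the trailing object name after detector is a made-up placeholder:
Code Block
# Sketch of how the routing tag is derived from the filename (illustrative only).
filename = "cequence-data/cequence-devo-6x-NAieMI/detector/example-object.gz"

# file_field_definitions rule: split on "/" and take element 2.
log_type = filename.split("/")[2]                      # -> "detector"

# Resolve the [file-log_type] placeholder in the routing_template.
routing_template = "my.app.test_cequence.[file-log_type]"
tag = routing_template.replace("[file-log_type]", log_type)
print(tag)                                             # my.app.test_cequence.detector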
Options
...
direct_mode
...
Allowed values are true or false (default value is false). Set to true if the logs are sent directly to the queue without using S3.
...
file_field_definitions
...
Defined as a dictionary mapping variable names (of your choosing) to lists of parsing rules.
Each parsing rule has an operator with its own keys. Parsing rules are applied in the order they are listed in the configuration.
The split operator uses the on and element keys. The file name is split into pieces on the character or character sequence specified in the on key, and whatever sits at the specified element index is extracted, as in the example below.
The replace operator uses the to_replace and replace_with keys.
For example, if your filename is server_logs/12409834/ff.gz, this configuration would store the log_type as serverlogs:
Code Block
"file_field_definitions": {
    "log_type": [
        {"operator": "split", "on": "/", "element": 0},
        {"operator": "replace", "to_replace": "_", "replace_with": ""}
    ]
}
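In plain Python, these two rules chain together roughly like this (an illustrative sketch, not the collector's implementation):
Code Block
# Apply the split rule, then the replace rule, in the order configured.
filename = "server_logs/12409834/ff.gz"
value = filename.split("/")[0]    # split rule -> "server_logs"
value = value.replace("_", "")    # replace rule -> "serverlogs"
# "serverlogs" is stored as the log_type file field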
...
filename_filter_rules
...
A list of rules to filter out entire files.
...
encoding
...
Takes any string. From most common to least common: gzip, none, parquet, latin-1.
...
ack_messages
...
Decides whether or not to delete messages from the queue after processing. It takes boolean values; if not specified, the default is true. We recommend leaving this out of the config. If you see it in there, pay close attention to whether it's on or off.
...
file_format
type - A string specifying which processor to use.
...
single_json_object - Logs are stored as/in a JSON object.
single_json_object_processor config options:
key - (string) The key under which the list of logs is stored.
Code Block
config: {"key": "log"}
fileobj: {..."log": {...}}
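A minimal sketch of what this processor does, assuming the configured key holds the logs (the file contents here are hypothetical):
Code Block
import json

config = {"key": "log"}
raw = '{"metadata": "x", "log": {"event": 1}}'  # hypothetical file contents
fileobj = json.loads(raw)       # the whole file is one JSON object
log = fileobj[config["key"]]    # the log(s) live under the configured key
print(log)                      # {'event': 1}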
...
unseparated_json_processor - Logs are stored as/in JSON objects, which are written in a text file with no separator.
unseparated_json config options:
key - (string) where the log is stored
include - (dict) maps names of keys outside of the inner part to be included, which can be renamed
If there is no key, that is, the whole JSON object is the desired log, set "flat": true.
See aws_config_collector for an example:
Code Block
fileobj: {...}{...}{...}
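One general way to parse such back-to-back JSON objects in Python is json.JSONDecoder.raw_decode, which returns one parsed object plus the index where parsing stopped (a sketch of the technique, not the collector's code):
Code Block
import json

raw = '{"a": 1}{"b": 2}{"c": 3}'   # hypothetical unseparated file contents
decoder = json.JSONDecoder()
logs, pos = [], 0
while pos < len(raw):
    obj, pos = decoder.raw_decode(raw, pos)  # parse one object, advance index
    logs.append(obj)
print(logs)  # [{'a': 1}, {'b': 2}, {'c': 3}]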
...
text_file_processor - Logs are stored as text files, potentially with lines and fields separated with, e.g., commas and newlines.
text_file config options: include options for how lines and records are separated (e.g., newline, tab, comma); good for CSV-style data.
...
line_split_processor - Logs are stored in a newline-separated file; works more quickly than separated_json_processor.
config options: "json": true or false. Setting json to true assumes that the logs are newline-separated JSON and allows them to be parsed by the collector, therefore enabling record-field mapping.
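With "json": true, the behavior is roughly this sketch (hypothetical contents):
Code Block
import json

raw = '{"a": 1}\n{"b": 2}'   # hypothetical newline-separated JSON logs
records = [json.loads(line) for line in raw.splitlines() if line.strip()]
print(records)  # [{'a': 1}, {'b': 2}]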
...
separated_json_processor - Logs are stored as many JSON objects with some kind of separator between them.
config options: specify the separator, e.g. "separator": "||". The default is newline if left unset.
Code Block
fileobj: {...}||{...}||{...}
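With the "||" separator from the example, the processing is conceptually this sketch:
Code Block
import json

raw = '{"a": 1}||{"b": 2}||{"c": 3}'   # hypothetical file contents
separator = "||"                        # from config; default is newline
records = [json.loads(part) for part in raw.split(separator) if part.strip()]
print(records)  # [{'a': 1}, {'b': 2}, {'c': 3}]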
...
jamf_processor - Special processor for JAMF logs.
...
aws_access_logs_processor - Special processor for AWS access logs.
...
windows_security_processor - Special processor for Windows Security logs.
...
vpc_flow_processor - Special processor for VPC Flow logs.
...
json_line_arrays_processor - Processor for unseparated JSON objects that are on multiple lines of a single file.
Code Block
fileobj: {...}{...}
{...}{...}{...}
{...}
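Each line may hold several concatenated objects, so one way to handle this is to apply the raw_decode technique shown above line by line (sketch):
Code Block
import json

raw = '{"a": 1}{"b": 2}\n{"c": 3}'   # hypothetical multi-line file contents
decoder = json.JSONDecoder()
records = []
for line in raw.splitlines():
    pos = 0
    while pos < len(line):
        obj, pos = decoder.raw_decode(line, pos)  # one object at a time
        records.append(obj)
print(records)  # [{'a': 1}, {'b': 2}, {'c': 3}]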
...
dict_processor - Processor for logs that come as Python dictionary objects, i.e. in direct mode.
...
config - A dictionary of information that the specified file_format processor needs.
...
record_field_mapping
...
A dictionary where each key defines a variable that can be parsed out from each record (and may be referenced later in filtering).
For example, we may want to parse something and call it "type" by getting "type" from a certain key in the record (which may be multiple layers deep).
Code Block
{"type": {"keys": ["file", "type"], "operations": []}}
keys is a list of the keys in the record to look into to find the value; it handles nesting (essentially defining a path through the data). Suppose we have logs that look like this:
Code Block
{"file": {"type": {"log_type": 100}}}
So if we want to get the log_type, we list all the keys needed to walk through the JSON, in order:
Code Block
keys: ["file", "type", "log_type"]
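The key traversal is essentially a walk down through nested dictionaries, as in this sketch (illustrative only):
Code Block
# Walk a list of keys down through nested dicts.
def get_nested(record, keys):
    value = record
    for key in keys:
        value = value[key]
    return value

record = {"file": {"type": {"log_type": 100}}}
print(get_nested(record, ["file", "type", "log_type"]))  # 100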
In many cases you will probably only need one key, e.g. in flat JSON that isn't nested:
Code Block
{"log_type": 100, "other_info": "blah", ...}
Here you would just specify keys: ["log_type"]. A few operations are supported that can be used to further alter the parsed information (like split and replace). The first snippet above would grab whatever is located at log["file"]["type"] and name it "type".
record_field_mapping defines variables by taking them from logs, and these variables can then be used for filtering. Let's say you have a log in JSON format like this, which will be sent to Devo:
Code Block
{"file": {"value": 0, "type": "security_log"}}
Specifying "type" in the record_field_mapping will allow the collector to extract that value, "security_log", and save it as type. Now let's say you want to change the tag dynamically based on that value. You could change the routing_template to something like my.app.datasource.[record-type]. In the case of the log above, it would be sent to my.app.datasource.security_log. Now let's say you want to filter out (not send) any records which have the type security_log. You could write a line_filter_rule as follows:
{"source": "record", "key": "type", "type": "match", "value": "security_log" }
We specified the source as record because we want to use a variable from the record_field_mapping. We specified the key as "type" because that is the name of the variable we defined. We specified the type as "match" because we want to filter out any record matching this rule. And we specified the value as security_log because we specifically do not want to send any records with the type equalling "security_log".
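Conceptually, evaluating that rule against the extracted record fields amounts to this sketch (the collector's real matching logic may differ):
Code Block
rule = {"source": "record", "key": "type", "type": "match", "value": "security_log"}
record_fields = {"type": "security_log"}   # extracted via record_field_mapping

def is_filtered_out(rule, fields):
    # A record matching a "match" rule is dropped (not sent).
    if rule["source"] == "record" and rule["type"] == "match":
        return fields.get(rule["key"]) == rule["value"]
    return False

print(is_filtered_out(rule, record_fields))  # True -> the record is not sent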
The split operation is the same as if you ran the Python split function on a string. Let's say you have a filename "logs/account_id/folder_name/filename" and you want to save the account_id as a variable to use for tag routing or filtering.
You could write a file_field_definition like this:
"account_id": [{"operator": "split", "on": "/", "element": 1}]
This would store a variable called account_id by taking the entire filename, splitting it into pieces based on where it finds forward slashes, then taking the element at position one. In Python it would look like:
Code Block
filename.split("/")[1]
...
routing_template
A string defining how to build the tag to send each message, for example, my.app.test_cequence.[file-log_type]. If the log_type extracted from the file was detector, the record would be sent to the tag my.app.test_cequence.detector.
Options for filtering
Line-level filters
line_filter_rules
These are a list of rules for filtering out single events.
We want to discard all the events that match these conditions:
...
In Devo, these criteria are specified with the following query. If everything is OK after configuring the collector properly, there should not be any events when we run this query:
...
In this case, the key for the filter is the ...
Elements in different lists are ...
What if we want to filter out the events that match this pseudocode query that has mixed conditions?
...
In this case, the keys for the filter are ...
Elements in different lists are ...
File-level filters
These are a list of rules to filter out entire files by the specified pattern applied over the file name.
This will filter out files that contain ...
...
This will filter out files that do not contain ...
...
If something seems wrong at launch, you can set the following in the collector parameters/job config:
...
This will print out data as it is being processed, stop messages from being acked, and, at the last step, will not send the data. This way, you can easily check whether something is not working properly.